Web Standards

The Twelfth Fourth

Css Tricks - Thu, 07/04/2019 - 9:09am

CSS-Tricks is 12 years old! Firmly into that Early Adolescence stage, I'd say ;) As we do each year, let's reflect upon the past year. I'd better have something to say, right? Otherwise, John Prine would get mad at me.

How the hell can a person go to work in the morning
And come home in the evening and have nothing to say.
- Angel From Montgomery

See the Pen
Fireworks!
by Tim Severien (@timseverien)
on CodePen.

Easily the biggest change this year was design v17

We redesign most years, so it's not terribly shocking I suppose that we did this year, but I think it's fairly apparent that this was a big one. The biggest since v10.

Here's a writeup on v17.

I still get happy emails about it.

The aesthetics of it still feel fresh to me, 6 months later. There are no plans at all yet for what the next version will be. I imagine this one will last a good couple of years with tweaks along the way. I'm always working on it. Just in the last few days, I've made several commits cleaning things up, adding little features, and optimizing. That work is never done. v18 might just be a more thorough scrubbing of what is here. Might be a good release to focus on the back-end tech. I've always wanted to try some sort of MVC setup.

In a way, things feel easier.

There is a lot going right around here. We've got a great staff. Our editorial workflow, led by Geoff, has been smooth. There are ebbs and flows of how many great guest posts are in the pipeline, but it never seems to run dry and these days we stay more ahead than we ever have.

We stay quite organized with Notion. In fact, I still use it heavily across all the teams I'm on. It's just as fundamental as Slack and email.

We're still working with BuySellAds as a partner to help us sell advertising and sponsorship partnerships. We've worked with them for ages and they really do a good job with clean ad tech, smooth integration workflows, and finding good companies that want to run campaigns.

On the 10th anniversary I wrote:

If you do all the work, the hope is that you just get to keep on keeping on. Everyone gets paid for their effort. This is not a hockey-stick growth kind of site. It's a modest publication.

Yep.

Check out a year over year chart from Google Analytics:

I can look at that and celebrate the moments with growth. Long periods of 20% year over year growth, which is cool. Then if you look at just this last month, we're more even or a little bit under 2018 (looking at only pageviews). Good to know, kinda, but I never put much stock in this kind of generic analytics. I'm glad we record them. I would want to know if we started tanking or growing hugely. But we never do. We have long slow steady growth and that is a comfortable place for me.

Thinking on ads

The world of advertising is tightly integrated around here, of course. I'm sure many of you look at this site and go JEEZ, LITTLE HEAVY ON THE ADS, EH? I hope it's not too big a turnoff, as I really try to be tasteful with them. But another thing you should know is that the ad tech is clean. No tracking stuff. No retargeting. No mysterious third-party JavaScript. There might be an impression-tracking pixel here and there, but that's about it. No slew of hundreds of requests doing god-knows-what.

That's not by accident. It's clear to me now how to go down that other road, and that road has money on it. Twice as much. But I look at those as short-term gains. Nobody is going to be madder at me than you if I slap 80 tracking scripts on this site; my credibility amongst devs would go out the window, along with any hopes of sustaining or growing this site. It's no surprise to me that on sites without developers as an audience, the tendency is to go down the golden road of tracking scripts.

Even the tech is easier.

Just starting in July, I've gotten all my sites onto Flywheel hosting, and I've written about that here just today. Flywheel is a new sponsor here on the site, and I'm just as excited about that as I am about actually using it. Between using Local for local WordPress development, GitHub for repos, Buddy for deployment, and Cloudflare for DNS/CDN... everything just feels smooth and easy right now.

The way I feel about tech at the moment is that nearly anything is doable. Even stuff that feels insurmountable. It's just work. Do a bunch of work, get the thing done.

Fancy posts

One thing that we snuck in this year is the idea of posts that have special design applied to them. "Art-directed articles" seems to be the term that has stuck for that, for better or worse, and we've added to that.

There are posts like The Great Divide that I intentionally wanted to make stand out.

And now we've taken that and turned it into a template. The point of an art-directed article is to do something unique, so a template is a little antithetical to that, but I think this strikes a nice middle ground. The template assumes a big full-width header with a background image under a big title, and is otherwise just a centered column of type on white. The point is to use the template, then apply custom styles on top of it as needed to do something special for the post. I have a good feeling we'll keep using it and have fun with it, and that it won't be too burdensome for future designs.

Elsewhere

Last year at this time I was just settling into living in Bend, Oregon. It still feels that way. I'm in a new house now, that we've bought, and it feels like this is a very permanent living situation. But we're less than a year into the new house so there is plenty of dust to settle. I'm still honeymooning on Bend as I just love it here so much. My daughter is just over a year and a half now so stability is very much what we're after.

Professionally, most of my time is on CodePen, of course. There is a lot of overlap, like the fact that we work with BuySellAds on both sites and often sell across both. Plus working on CSS-Tricks always has me in CodePen anyway ;). Miraculously, Dave Rupert and I almost never miss a week on ShopTalk Show. Going strong after all these years. Never a shortage of stuff to talk about when it comes to websites.

Thank you

A big hearty thanks from me! Y'all reading this site is what makes it possible.

The post The Twelfth Fourth appeared first on CSS-Tricks.

CSS-Tricks on Flywheel

Css Tricks - Thu, 07/04/2019 - 5:50am

I first heard of Flywheel through their product Local, which is a native app for working on WordPress sites. If you ask around for what people use for that kind of work, you'll get all sorts of answers, but an awful lot of very strong recommendations for Local. I've become one of them! We ultimately did a sponsored post for Local, but that's based on the fact that now 100% of my local WordPress development work is done using it and I'm very happy with it.

Now I've taken the next step and moved all my production sites to Flywheel hosting!

Full disclosure here, Flywheel is now a sponsor of CSS-Tricks. I've been wanting to work with them for a while. I've been out to visit them in Omaha! (👋 at Jamie, Christi, Karissa, and everybody I've worked with over there.) Part of our deal includes the hosting. But I was a paying customer and user of Flywheel before this on some sites, and my good experiences there are what made me want to get this sponsorship partnership cooking! There has been big recent news that Flywheel was acquired by WP Engine. I'm also a fan of WP Engine, also a premium WordPress host that has done innovative things with hosting, so I'm optimistic that a real WordPress hosting powerhouse is being formed and that I've got my sites in the right place.

Developing on Local is a breeze

It feels like a breath of fresh air to me, as running all the dev dependencies for WordPress has forever been a bit of a pain in the butt. Sometimes you have it going fine, but then something breaks in the most inscrutable possible way and it takes forever to get going again. Whatever, you know what I mean. At this point, I've been running Local for over a year and have had almost no issues with it.

There are all kinds of features worth checking out here. Here's one that is very likely useful to bigger teams. Say you have a Flywheel account with a bunch of production sites on it. Then a new person starts working with you and they have their own computer. You connect Local to Flywheel, and you can pull down the site and have it ready to work on. That's pretty sweet.

Local doesn't lock you into anything either. You can use Local for local development and literally use nothing else. Local can push a site up to Flywheel hosting too, which I've found to be mighty useful particularly for that first deployment of a new site, but you don't have to use that if you don't want. I'll cover more about workflow below.

Other features that I find worthy of note:

  • Spinning up a new site takes just a second. A quick walk through a wizard where they ask you some login details but otherwise offer smart-but-customizable defaults.
  • Dealing with HTTPS locally is easy. It will create a certificate for you and trust it locally with one click.
  • You can flip on "Live Link", which uses ngrok to create a live, sharable URL to your localhost site. Great for temporarily showing a client or co-worker something without having to move anything.
  • One click to pop open the database in Sequel Pro, my favorite free database tool. Much easier than trying to spin up phpMyAdmin or whatever on the web to manage from there.
Flywheel's Dashboard is so clear

I love the simple UI of Local, and I really like how that same design and spirit carries over into the Flywheel hosting dashboard.

There are so many things the dashboard makes easy:

  • You need an SSL cert? Click some buttons.
  • Wanna force HTTPS? Flip a switch.
  • Wanna convert the site to Multisite? Hit a button.
  • Need to edit the database? There is a UI around it built in.
  • Want a CDN? Toggle a thing.
  • Need to invite a collaborator on a site? Go for it.
  • Need a backup? They're in there; download one or restore to that point.

It's a big deal when everything is simple and works. It means you aren't burning hours fighting with tools and can instead spend them on work that pushes you forward.

Workflow

When I set up my new CSS-Tricks workflow, I had Flywheel move the site for me (thanks gang!) (no special treatment either, they'll do that for anybody).

I've got Local already, so my local development process is the same. But I needed to update my deployment workflow for the new hosting. Local can push a site up to Flywheel hosting, but it just zips everything up and sends it all up. Great for first deployment but not perfect for tiny little changes like 95% of the work I do. There is a new Local for Teams feature, which uses what they call MagicSync for deployment, which only deploys changed files. That's very cool, but I like working with a Git-based system, where ultimately merges to master are what trigger deployment of the changed files.

For years I've used Beanstalk for Git-based deployment over SFTP. I still am using Beanstalk for many sites and think it's a great choice, but Beanstalk has the limitation that the Git repo is basically a private Git repo hosted by Beanstalk itself.

During this change, I needed to switch up what the root of the repo is (more on that in a second) so I wanted to create a new repo. I figured rather than doing that on Beanstalk, I'd make a private GitHub repo and set up deployment from there. There are services like DeployHQ and DeployBot that will work well for that, but I went with Buddy, which has a really really nice UI for managing all this stuff, and is capable of much more than just deployment should I ultimately need that.

Regarding the repo itself, one thing that I've always done with my WordPress sites is just make the repo the whole damn thing, starting at the root. I think it's just a legacy/comfort thing. I had some files at the root I wanted to deploy along with everything else and that seemed like the easiest way. In WordPress-land, this isn't usually how it's done. It's more common to have the /wp-content/ folder be the root of the repo, as those are essentially the only files unique to your installation. I can imagine setups where individual themes are their own repos and deployed alone.

I figured I'd get on board with a more scoped deployment, but also, I didn't have much of a choice. Flywheel literally locks down all WordPress core files, so if your deployment system tries to override them, it will just fail. That actually sounds great to me. There is no reason anyone from the outside should alter those files, might as well totally remove it as an attack vector. Flywheel itself keeps the WordPress version up to date. So I made a new repo with /wp-content/ at the root, and I figured I'd make it on GitHub instead just because that's such an obvious hub of developer activity and keeps my options wide open for deployment choices.

Maybe I'll open source it all one day when I've had a chance to comb through it.

For the same kind of spiritual reasons, during the move, I moved the DNS over to Cloudflare. This gives me control over DNS from a third party so it's easy for me to point things where I need them. Kind of a decentralization of concerns. That's not for everyone, but it's great for me on this project. While now I might suffer from Cloudflare outages (rare, but it literally just happened), I benefit from all sorts of additional security and performance that Cloudflare can provide.

So the workflow is Local > GitHub > Buddy > Flywheel.

And the hosting is Cloudflare > Flywheel with image assets on Cloudinary.

And I've got backups from both Flywheel and Jetpack/VaultPress.

The post CSS-Tricks on Flywheel appeared first on CSS-Tricks.

Menus with “Dynamic Hit Areas”

Css Tricks - Wed, 07/03/2019 - 10:48am

Flyout menus! The second you need to implement a menu that uses a hover event to display more menu items, you're in tricky territory. For one, they should work with clicks and taps, too. Without that, you've broken the menu for anyone without a mouse. That doesn't mean you can't also use :hover. When you use a hover state to reveal more content, that means an un-hovering state needs to hide them. Therein lies the problem.

The problem is that if a submenu pops out somewhere on hover, getting your mouse over to it might involve moving it along a fairly narrow corridor. Accidentally move outside that area and the menu can close, which makes for an extremely frustrating UX moment.

We've covered this before in our "Dropdown Menus with More Forgiving Mouse Movement Paths" article.

You can get to the menu item you want, but there are some narrow passages along the way. Many dropdowns are designed such that the submenu holding the desired menu item may close on you when the right area loses :hover, or when a mouseleave or mouseout occurs.

The most compelling examples that solve this issue are the ones that involve extra hidden "hit areas." Amazon doesn't really have menus like this anymore (that I can see), and perhaps this is one of the reasons why. But in the past, they've used this hit area technique. We could call them "dynamic hit areas" because they were drawn based on the position of the parent element and the submenus:

I haven't seen a lot of implementations of this lately, but just recently, Hakim El Hattab included a modern implementation of this in his talk at CSS Day 2019. The implementation leverages drawing the areas dynamically with SVG. You don't actually see the hit areas, but they do look like this, thus forming paths that prevent hover-offs.
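To make the idea more concrete, here is a minimal sketch of how such a hit area could be computed and drawn. This is my own illustration of the concept, not Hakim's implementation; all names here are hypothetical:

const SVG_NS = 'http://www.w3.org/2000/svg';

// Draw an invisible triangle from the hovered menu item to its submenu so
// the pointer can cross the gap without triggering a hover-off.
function drawHitArea(svg, itemEl, submenuEl) {
  const item = itemEl.getBoundingClientRect();
  const menu = submenuEl.getBoundingClientRect();

  const points = [
    [item.right, item.top + item.height / 2], // middle of the item's right edge
    [menu.left, menu.top],                    // submenu's top-left corner
    [menu.left, menu.bottom]                  // submenu's bottom-left corner
  ].map(p => p.join(',')).join(' ');

  const polygon = document.createElementNS(SVG_NS, 'polygon');
  polygon.setAttribute('points', points);
  polygon.setAttribute('fill', 'transparent'); // invisible, but still receives pointer events
  svg.appendChild(polygon);
  return polygon;
}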

I'll include a YouTube embed of the talk starting at that point here:

The way he draws the hit area is so fancy it makes me all kinds of happy:

The live demo of it is up on the Slides.com pattern library thingy.

The post Menus with “Dynamic Hit Areas” appeared first on CSS-Tricks.

Hey, let’s create a functional calendar app with the JAMstack

Css Tricks - Wed, 07/03/2019 - 4:37am

I’ve always wondered how dynamic scheduling worked so I decided to do extensive research, learn new things, and write about the technical part of the journey. It’s only fair to warn you: everything I cover here is three weeks of research condensed into a single article. Even though it’s beginner-friendly, it’s a healthy amount of reading. So, please, pull up a chair, sit down and let’s have an adventure.

My plan was to build something that looked like Google Calendar but only demonstrated three core features:

  1. List all existing events on a calendar
  2. Create new events
  3. Schedule an email notification based on the date chosen during creation. The schedule should run some code to email the user when the time is right.

Pretty, right? Make it to the end of the article, because this is what we’ll make.

The only knowledge I had about asking my code to run at a later or deferred time was CRON jobs. The easiest way to use a CRON job is to statically define a job in your code. This is ad hoc — statically means that I cannot simply schedule an event like Google Calendar and easily have it update my CRON code. If you are experienced with writing CRON triggers, you feel my pain. If you’re not, you are lucky you might never have to use CRON this way.
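For illustration, here is what that static approach looks like with the node-cron package (my example, not something from this project; most CRON libraries look about the same):

const cron = require('node-cron');

// The schedule is hard-coded when the app is written and deployed. There is
// no way for an incoming HTTP request to register a brand new one-off schedule.
cron.schedule('0 9 * * 1', () => {
  console.log('Runs every Monday at 9:00, and only ever that.');
});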

To elaborate more on my frustration, I needed to trigger a schedule based on a payload of HTTP requests. The dates and information about this schedule would be passed in through the HTTP request. This means there’s no way to know things like the scheduled date beforehand.

We (my colleagues and I) figured out a way to make this work and — with the help of Sarah Drasner's article on Durable Functions — I understood what I needed to learn (and unlearn, for that matter). You will learn about everything I worked on in this article, from event creation to email scheduling to calendar listings. Here is a video of the app in action:

You might notice the subtle delay. This has nothing to do with the execution timing of the schedule or running the code. I am testing with a free SendGrid account, which I suspect has some form of latency. You can confirm this by testing the serverless function responsible without sending emails. You would notice that the code runs at exactly the scheduled time.

Tools and architecture

Here are the three fundamental units of this project:

  1. React Frontend: Calendar UI, including the UI to create, update or delete events.
  2. 8Base GraphQL: A back-end database layer for the app. This is where we will store, read and update our data. The fun part is you won't write any code for this back end.
  3. Durable Functions: Durable Functions are a kind of serverless function with the power of remembering their state from previous executions. This is what replaces CRON jobs and solves the ad hoc problem we described earlier.

See the Pen
durable-func1
by Chris Nwamba (@codebeast)
on CodePen.

The rest of this post will have three major sections based on the three units we saw above. We will take them one after the other, build them out, test them, and even deploy the work. Before we get on with that, let's get set up with a starter project I made.

Project Repo

Getting Started

You can set up this project in different ways — either as a full-stack project with the three units in one project or as standalone projects with each unit living in its own root. Well, I went with the first because it's more concise, easier to teach, and manageable since it's one project.

The app will be a create-react-app project and I made a starter for us to lower the barrier to set up. It comes with supplementary code and logic that we don’t need to explain since they are out of the scope of the article. The following are set up for us:

  1. Calendar component
  2. Modal and popover components for presenting event forms
  3. Event form component
  4. Some GraphQL logic to query and mutate data
  5. A Durable Serverless Function scaffold where we will write the schedulers

Tip: Each existing file that we care about has a comment block at the top of the document. The comment block tells you what is currently happening in the code file and a to-do section that describes what we are required to do next.

Start by cloning the starter from GitHub:

git clone -b starter --single-branch https://github.com/christiannwamba/calendar-app.git

Install the npm dependencies described in the root package.json file as well as the serverless package.json:

npm install

Orchestrated Durable Functions for scheduling

There are two words we need to get out of the way first before we can understand what this term is — orchestration and durable.

Orchestration was originally used to describe an assembly of well-coordinated events, actions, etc. It is heavily borrowed in computing to describe a smooth coordination of computer systems. The key word is coordinate. We need to put two or more units of a system together in a coordinated way.

Durable is used to describe anything that has the outstanding feature of lasting longer.

Put system coordination and long-lasting together, and you get Durable Functions. This is the most powerful feature of Azure's Serverless Functions. Based on what we now know, Durable Functions have these two features:

  1. They can be used to assemble the execution of two or more functions and coordinate them so race conditions do not occur (orchestration).
  2. Durable Functions remember things. This is what makes them so powerful. They break the number one rule of HTTP: statelessness. Durable Functions keep their state intact no matter how long they have to wait. Create a schedule for 1,000,000 years into the future and a durable function will execute after one million years while remembering the parameters that were passed to it on the day of the trigger. That means Durable Functions are stateful.

These durability features unlock a new realm of opportunities for serverless functions and that is why we are exploring one of those features today. I highly recommend Sarah’s article one more time for a visualized version of some of the possible use cases of Durable Functions.

I also made a visual representation of the behavior of the Durable Functions we will be writing today. Take this as an animated architectural diagram:

A data mutation from an external system (8Base) triggers the orchestration by calling the HTTP Trigger. The trigger then calls the orchestration function which schedules an event. When the time for execution is due, the orchestration function is called again but this time skips the orchestration and calls the activity function. The activity function is the action performer. This is the actual thing that happens e.g. "send email notification".

Create orchestrated Durable Functions

Let me walk you through creating functions using VS Code. You need two things:

  1. An Azure account
  2. VS Code

Once you have both set up, you need to tie them together. You can do this using a VS Code extension and a Node CLI tool. Start with installing the CLI tool:

npm install -g azure-functions-core-tools

# OR

brew tap azure/functions
brew install azure-functions-core-tools

Next, install the Azure Function extension to have VS Code tied to Functions on Azure. You can read more about setting up Azure Functions from my previous article.

Now that you have all the setup done, let’s get into creating these functions. The functions we will be creating will map to the following folders.

  • schedule: Durable HTTP Trigger
  • scheduleOrchestrator: Durable Orchestration
  • sendEmail: Durable Activity

Start with the trigger.

  1. Click on the Azure extension icon and follow the image below to create the schedule function
  2. Since this is the first function, we chose the folder icon to create a function project. The icon after that creates a single function (not a project).
  3. Click Browse and create a serverless folder inside the project. Select the new serverless folder.
  4. Select JavaScript as the language. If TypeScript (or any other language) is your jam, please feel free.
  5. Select Durable Functions HTTP starter. This is the trigger.
  6. Name the first function as schedule

Next, create the orchestrator. This time, instead of creating a function project, create a single function.

  1. Click on the function icon:
  2. Select Durable Functions orchestrator.
  3. Give it a name, scheduleOrchestrator and hit Enter.
  4. You will be asked to select a storage account. Orchestrator uses storage to preserve the state of a function-in-process.
  5. Select a subscription in your Azure account. In my case, I chose the free trial subscription.
  6. Follow the few remaining steps to create a storage account.

Finally, repeat the previous step to create an Activity. This time, the following should be different:

  • Select Durable Functions activity.
  • Name it sendEmail.
  • No storage account will be needed.
Scheduling with a durable HTTP trigger

The code in serverless/schedule/index.js does not need to be touched. This is what it looks like originally when the function is scaffolded using VS Code or the CLI tool.

const df = require("durable-functions");

module.exports = async function (context, req) {
  const client = df.getClient(context);
  const instanceId = await client.startNew(req.params.functionName, undefined, req.body);

  context.log(`Started orchestration with ID = '${instanceId}'.`);

  return client.createCheckStatusResponse(context.bindingData.req, instanceId);
};

What is happening here?

  1. We’re creating a durable function on the client side that is based on the context of the request.
  2. We’re calling the orchestrator using the client's startNew() function. The orchestrator function name is passed as the first argument to startNew() via the params object. A req.body is also passed to startNew() as the third argument and is forwarded to the orchestrator.
  3. Finally, we return a set of data that can be used to check the status of the orchestrator function, or even cancel the process before it's complete.
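For reference, the body of that check-status response looks roughly like the following (URIs shortened, and treat the exact field list as an assumption since it varies between durable-functions versions):

{
  "id": "abc123",
  "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123?...",
  "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/raiseEvent/{eventName}?...",
  "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/terminate?...",
  "purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123?..."
}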

The URL to call the above function would look like this:

http://localhost:7071/api/orchestrators/{functionName}

Where functionName is the name passed to startNew. In our case, it should be:

http://localhost:7071/api/orchestrators/scheduleOrchestrator

It’s also good to know that you can change how this URL looks.
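For instance, the orchestrators/{functionName} part comes from the route on the trigger's HTTP binding. Here is a sketch of the relevant piece of serverless/schedule/function.json (abbreviated; your scaffolded file may differ slightly):

{
  "bindings": [
    {
      "authLevel": "anonymous",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "route": "orchestrators/{functionName}",
      "methods": ["post"]
    }
  ]
}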

Orchestrating with a Durable Orchestrator

The HTTP trigger startNew call calls a function based on the name we pass to it. That name corresponds to the name of the function and folder that holds the orchestration logic. The serverless/scheduleOrchestrator/index.js file exports a Durable Function. Replace the content with the following:

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput()
  // TODO -- 1
  // TODO -- 2
});

The orchestrator function retrieves the request body from the HTTP trigger using context.df.getInput().

Replace TODO -- 1 with the following line of code which might happen to be the most significant thing in this entire demo:

yield context.df.createTimer(new Date(input.startAt))

What this line does is use the Durable Function to create a timer based on the date passed in from the request body via the HTTP trigger.

When this function executes and gets here, it will trigger the timer and bail temporarily. When the schedule is due, it will come back, skip this line, and call the following line, which you should use in place of TODO -- 2.

return yield context.df.callActivity('sendEmail', input);

The function would call the activity function to send an email. We are also passing a payload as the second argument.

This is what the completed function would look like:

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput()
  yield context.df.createTimer(new Date(input.startAt))
  return yield context.df.callActivity('sendEmail', input);
});

Sending email with a durable activity

When a schedule is due, the orchestrator comes back to call the activity. The activity file lives in serverless/sendEmail/index.js. Replace what’s in there with the following:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env['SENDGRID_API_KEY']);

module.exports = async function(context) {
  // TODO -- 1
  const msg = {}
  // TODO -- 2
  return msg;
};

It currently imports SendGrid’s mailer and sets the API key. You can get an API Key by following these instructions.

I am setting the key in an environmental variable to keep my credentials safe. You can safely store yours the same way by creating a SENDGRID_API_KEY key in serverless/local.settings.json with your SendGrid key as the value:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<AzureWebJobsStorage>",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "SENDGRID_API_KEY": "<SENDGRID_API_KEY>"
  }
}

Replace TODO -- 1 with the following line:

const { email, title, startAt, description } = context.bindings.payload;

This pulls out the event information from the input passed in by the orchestrator function. The input is attached to context.bindings. The binding name, payload, can be anything you want, so go to serverless/sendEmail/function.json and change the name value to payload:

{
  "bindings": [
    {
      "name": "payload",
      "type": "activityTrigger",
      "direction": "in"
    }
  ]
}

Next, update TODO -- 2 with the following block to send an email:

const msg = {
  to: email,
  from: {
    email: 'chris@codebeast.dev',
    name: 'Codebeast Calendar'
  },
  subject: `Event: ${title}`,
  html: `<h4>${title} @ ${startAt}</h4>
         <p>${description}</p>`
};
sgMail.send(msg);

return msg;

Here is the complete version:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env['SENDGRID_API_KEY']);

module.exports = async function(context) {
  const { email, title, startAt, description } = context.bindings.payload;

  const msg = {
    to: email,
    from: {
      email: 'chris@codebeast.dev',
      name: 'Codebeast Calendar'
    },
    subject: `Event: ${title}`,
    html: `<h4>${title} @ ${startAt}</h4>
           <p>${description}</p>`
  };
  sgMail.send(msg);

  return msg;
};

Deploying functions to Azure

Deploying functions to Azure is easy. It’s merely a click away from the VS Code editor. Click on the circled icon to deploy and get a deploy URL:

Still with me this far in? You’re making great progress! It’s totally OK to take a break here, nap, stretch or get some rest. I definitely did while writing this post.

Data and GraphQL layer with 8Base

My easiest description and understanding of 8Base is "Firebase for GraphQL." 8Base is a database layer for any kind of app you can think of and the most interesting aspect of it is that it’s based on GraphQL.

The best way to describe where 8Base fits in your stack is to paint a picture of a scenario.

Imagine you are a freelance developer with a small-to-medium scale contract to build an e-commerce store for a client. Your core skills are on the web, so you are not very comfortable on the back end, though you can write a bit of Node.

Unfortunately, e-commerce requires managing inventories, order management, managing purchases, managing authentication and identity, etc. "Manage" at a fundamental level just means data CRUD and data access.

Instead of the redundant and boring process of creating, reading, updating, deleting, and managing access for entities in our back-end code, what if we could describe these business requirements in a UI? What if we could create tables that allow us to configure CRUD operations, auth and access? What if we had such help and could focus only on building front-end code and writing queries? Everything we just described is tackled by 8Base.

Here is the architecture of a back-end-less app that relies on 8Base as its data layer:

Create an 8Base table for events storage and retrieval

The first thing we need to do before creating a table is to create an account. Once you have an account, create a workspace that holds all the tables and logic for a given project.

Next, create a table, name the table Events and fill out the table fields.

We need to configure access levels. Right now, there’s nothing to hide from each user, so we can just turn on all access to the Events table we created:

Setting up Auth is super simple with 8base because it integrates with Auth0. If you have entities that need to be protected or want to extend our example to use auth, please go wild.

Finally, grab your endpoint URL for use in the React app:

Testing GraphQL queries and mutations in the playground

Just to be sure that we are ready to take the URL to the wild and start building the client, let’s first test the API with a GraphQL playground and see if the setup is fine. Click on the explorer.

Paste the following query in the editor.

query {
  eventsList {
    count
    items {
      id
      title
      startAt
      endAt
      description
      allDay
      email
    }
  }
}

I created some test data through the 8base UI and I get the result back when I run the query:

You can explore the entire database using the schema document on the right end of the explore page.

Calendar and event form interface

The third (and last) unit of our project is the React App which builds the user interfaces. There are four major components making up the UI and they include:

  1. Calendar: A calendar UI that lists all the existing events
  2. Event Modal: A React modal that renders the EventForm component to create an event
  3. Event Popover: Popover UI to read a single event, update an event using EventForm, or delete an event
  4. Event Form: An HTML form for creating a new event

Before we dive right into the calendar component, we need to set up the React Apollo client. The React Apollo provider empowers you with tools to query a GraphQL data source using React patterns. The original provider allows you to use higher-order components or render props to query and mutate data. We will be using a wrapper around the original provider that allows you to query and mutate using React Hooks.

In src/index.js, import the React Apollo Hooks and the 8base client in TODO -- 1:

import { ApolloProvider } from 'react-apollo-hooks';
import { EightBaseApolloClient } from '@8base/apollo-client';

At TODO -- 2, configure the client with the endpoint URL we got in the 8base setup stage:

const URI = 'https://api.8base.com/cjvuk51i0000701s0hvvcbnxg';
const apolloClient = new EightBaseApolloClient({
  uri: URI,
  withAuth: false
});

Use this client to wrap the entire App tree with the provider on TODO -- 3:

ReactDOM.render(
  <ApolloProvider client={apolloClient}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);

Showing events on the calendar

The Calendar component is rendered inside the App component and imports the BigCalendar component from npm. Then:

  1. We render Calendar with a list of events.
  2. We give Calendar a custom popover (EventPopover) component that will be used to edit events.
  3. We render a modal (EventModal) that will be used to create new events.

The only thing we need to update is the list of events. Instead of using a static array of events, we want to query 8base for all stored events.

Replace TODO -- 1 with the following line:

const { data, error, loading } = useQuery(EVENTS_QUERY);

Import the useQuery library from npm and the EVENTS_QUERY at the beginning of the file:

import { useQuery } from 'react-apollo-hooks';
import { EVENTS_QUERY } from '../../queries';

EVENTS_QUERY is exactly the same query we tested in 8base explorer. It lives in src/queries and looks like this:

export const EVENTS_QUERY = gql`
  query {
    eventsList {
      count
      items {
        id
        ...
      }
    }
  }
`;

Let’s add a simple error and loading handler on TODO -- 2:

if (error) return console.log(error);
if (loading)
  return (
    <div className="calendar">
      <p>Loading...</p>
    </div>
  );

Notice that the Calendar component uses the EventPopover component to render a custom event. You can also observe that the Calendar component file renders EventModal as well. Both components have been set up for you, and their only responsibility is to render EventForm.

Create, update and delete events with the event form component

The component in src/components/Event/EventForm.js renders a form. The form is used to create, edit or delete an event. At TODO -- 1, import useCreateUpdateMutation and useDeleteMutation:

import {useCreateUpdateMutation, useDeleteMutation} from './eventMutationHooks'
  • useCreateUpdateMutation: This mutation either creates or updates an event depending on whether the event already existed.
  • useDeleteMutation: This mutation deletes an existing event.

A call to either of these functions returns another function. The returned function can then serve as an event handler.

Now, go ahead and replace TODO -- 2 with a call to both functions:

const createUpdateEvent = useCreateUpdateMutation(
  payload,
  event,
  eventExists,
  () => closeModal()
);
const deleteEvent = useDeleteMutation(event, () => closeModal());
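Since each of these is just a function, it can be handed straight to the form's controls as an event handler, along these lines (hypothetical markup, not the actual EventForm JSX):

<button type="submit" onClick={createUpdateEvent}>Save event</button>
<button type="button" onClick={deleteEvent}>Delete event</button>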

These are custom hooks I wrote to wrap the useMutation hook exposed by React Apollo Hooks. Each hook creates a mutation and passes the mutation variables to useMutation. The blocks that look like the following in src/components/Event/eventMutationHooks.js are the most important parts:

useMutation(mutationType, {
  variables: { data },
  update: (cache, { data }) => {
    const { eventsList } = cache.readQuery({ query: EVENTS_QUERY });
    cache.writeQuery({
      query: EVENTS_QUERY,
      data: {
        eventsList: transformCacheUpdateData(eventsList, data)
      }
    });
    //..
  }
});

Call the Durable Function HTTP trigger from 8Base

We have spent quite some time building the serverless structure, data storage and UI layers of our calendar app. To recap: the UI sends data to 8base for storage, 8base saves data and triggers the Durable Function HTTP trigger, the HTTP trigger kicks off orchestration, and the rest is history. Currently, we are saving data with mutations, but we are not calling the serverless function anywhere in 8base.

8base allows you to write custom logic, which is what makes it very powerful and extensible. Custom logic consists of simple functions that are called based on actions performed on the 8base database. For example, we can set up a logic function to be called every time a mutation occurs on a table. Let's create one that is called when an event is created.

Start by installing the 8base CLI:

npm install -g 8base

On the calendar app project run the following command to create a starter logic:

8base init 8base

The 8base init command creates a new 8base logic project. You can pass it a directory name; in this case, we're naming the 8base logic folder 8base — don't get it twisted.

Trigger scheduling logic

Delete everything in 8base/src and create a triggerSchedule.js file in the src folder. Once you have done that, drop in the following into the file:

const fetch = require('node-fetch');

module.exports = async event => {
  const res = await fetch('<HTTP Trigger URL>', {
    method: 'POST',
    body: JSON.stringify(event.data),
    headers: { 'Content-Type': 'application/json' }
  });

  const json = await res.json();
  console.log(event, json);
  return json;
};

The information about the GraphQL mutation is available on the event object as data.
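Given what the sendEmail activity destructures later on, event presumably looks something like this (the shape is assumed for illustration; only the fields our code relies on are shown):

// event, as received by the logic function
{
  data: {
    title: 'Cake lemon berry muffin',
    description: 'Reminder for the tasting',
    startAt: '2019-07-10T09:00:00.000Z',
    email: 'someone@example.com'
  }
}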

Replace <HTTP Trigger URL> with the URL you got after deploying your function. You can get the URL by going to the function in the Azure portal and clicking "Copy URL."

You also need to install the node-fetch module, which will grab the data from the API:

npm install --save node-fetch

8base logic configuration

The next thing to do is tell 8base exactly what mutation or query needs to trigger this logic. In our case, that's a create mutation on the Events table. You can describe this information in the 8base.yml file:

functions:
  triggerSchedule:
    handler:
      code: src/triggerSchedule.js
    type: trigger.after
    operation: Events.create

In a sense, this is saying, when a create mutation happens on the Events table, please call src/triggerSchedule.js after the mutation has occurred.

We want to deploy all the things

Before anything can be deployed, we need to log into the 8Base account, which we can do via the command line:

8base login

Then, let’s run the deploy command to send and set up the app logic in your workspace instance.

8base deploy

Testing the entire flow

To see the app in all its glory, click on one of the days of the calendar. You should get the event modal containing the form. Fill that out and put a future start date so we trigger a notification. Try a date more than 2-5 mins from the current time because I haven’t been able to trigger a notification any faster than that.

Yay, go check your email! The email should have arrived thanks to SendGrid. Now we have an app that allows us to create events and get notified with the details of the event submission.

The post Hey, let’s create a functional calendar app with the JAMstack appeared first on CSS-Tricks.

Haunted: Hooks for Web Components

Css Tricks - Wed, 07/03/2019 - 4:36am

I was just chatting with Dave and he told me about Haunted. It's hooks, but for native web components! Pretty cool. I think the existence of stuff like this makes using web components more and more palatable — particularly in that totally-native no-build-step-needed-at-all kinda way.

I get that there are all sorts of issues with web components, but the things that typically turn me away from them are a lack of nice templating and rerendering and no state management.

But we can knock those two out right quick these days...

First, making a component like <my-app> is perfectly comfortable:

import { html } from "https://unpkg.com/lit-html/lit-html.js";
import { component } from "https://unpkg.com/haunted/haunted.js";

function App() {
  return html`
    <div class="module">
      Hello, World!
    </div>
  `;
}

customElements.define("my-app", component(App));

Then we could add some state with hooks:

import { html } from "https://unpkg.com/lit-html/lit-html.js";
import { component, useState } from "https://unpkg.com/haunted/haunted.js";

function App() {
  const [name, setName] = useState("Chris");

  return html`
    <div class="module">
      Hello, ${name}!
    </div>
  `;
}

customElements.define("my-app", component(App));

The CodePen Challenge this week is using the Star Wars API, so let's make a fetch request and use that to fill state. That's a great use case for useEffect.

import { html } from "https://unpkg.com/lit-html/lit-html.js";
import { component, useState, useEffect } from "https://unpkg.com/haunted/haunted.js";

function App() {
  const [planets, setPlanets] = useState([]);

  useEffect(() => {
    fetch('https://swapi.co/api/planets/?page=2')
      .then(response => {
        return response.json();
      })
      .then(data => {
        let planets = data.results;
        // remove ones with no diameters
        planets = planets.filter(planet => planet.diameter !== "0");
        setPlanets(planets);
      });
  }, []);

  return html`
    <style>
      /* Shadow DOM styles */
    </style>

    <div class="all-planets">
      ${planets.map(planet => html`
        <div class="planet" style="--dia: ${planet.diameter}px">
          <span class="planet-name">
            ${planet.name}
          </span>
        </div>
      `)}
    </div>
  `;
}

customElements.define("my-app", component(App));

That's a proper little web component!

See the Pen
Star Wars API with Haunted.js
by Chris Coyier (@chriscoyier)
on CodePen.

The post Haunted: Hooks for Web Components appeared first on CSS-Tricks.

Graphical User Interfaces for Git

Css Tricks - Tue, 07/02/2019 - 12:41pm

Git is command-line-driven software, but that doesn't mean you have to use the command line to make it work. There are lots of options! Some of the deepest programmer nerds I know prefer to use GUIs for Git (Graphical User Interface, or, you know, software where you can see things and click stuff), and some near pure-designers I know prefer working with the command line for Git. Swear to Git.

Lemme round up what look like the major players for Git GUIs these days.

No matter how much you love the CLI, don't GUI-shame. Lots of perfectly amazing programmers like working with GUIs, and it's perfectly fine.

There's some weird gatekeeping tendencies centered around the command line. #DevDiscuss

— Ben Halpern 🤗 (@bendhalpern) November 28, 2018

Tower

I've used Tower for ages and it's the one I use the most. I'm not sure of the exact release dates of all of these, but I feel like Tower was an early player here. They've been around a long time and continuously improve, which I always respect.



Fork

It's free and actively developed, incredibly.



GitHub Desktop

This is a 2.0 of the original GitHub Desktop. I had some gripes with the 1.0 version in that its terminology was weird (to me) and seemed to vastly deviate from Git, which was more confusing than it was worth (again, to me). This version cleans most of that up. It's deeply integrated into GitHub so it makes GitHub-y things (e.g. pull requests) feel like first-class citizens, but it will still happily work with any Git repo.



GitKraken

I'm pretty intrigued by this one. Upgrading (monthly cost) to get the in-app merge conflict tool seems worth it, but you also have to upgrade to access private repos. It seems highly feature rich, but I think my favorite part is the dark-with-rainbow-accent-colors theme.



Sourcetree

You might be compelled by Sourcetree if you're a big Bitbucket user because they are both Atlassian products. I know it works for any Git repo though. I imagine there is some smooth Bitbucket integration stuff with this, similar to the GitHub/GitHub Desktop connection.



Coda

You don't really think of Coda as a version control tool (it's more of a direct-to-FTP thing), and even though I'd argue the support for it is fairly half-baked, it does work! Seems likely the next evolution of Coda will address this.



VS Code

Having version control right in your IDE like this, to me, feels like kind of a tweener between a GUI and the CLI. There are a lot of features here, though it's not really a full-blown GUI to me; then again, you've got a terminal built right in, so it almost encourages that. A lot of Git usage is pretty basic pulling, committing, and pushing — so having this right within the app is kinda sweet.

(I imagine there are lots of other IDEs that offer version control features. PHPStorm, etc.)



The post Graphical User Interfaces for Git appeared first on CSS-Tricks.

Toast

Css Tricks - Tue, 07/02/2019 - 12:40pm

One day, all of a sudden, I started hearing jokes about toast. I had no idea what the context was. I assumed some friends had just started telling toast jokes, which isn't rare by any means. But it turns out it was a whole thing. It got me thinking, jeez, if I can't keep up with this stuff when it's kinda my job, it must be extra tough for people who actually do work for a living.

Anyway. Thankfully Jeremy summed it up nicely:

First off, this all kicked off with the announcement of “intent to implement”. That makes it sounds like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing and this is something that will hopefully be addressed.

Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. I’m assuming that should the experiment prove successful, it’s not a foregone conclusion that the final element name will be called toast.

The drama around it, hence the cause of all the jokes and such, was the fact that it felt like it came out of nowhere and was Chrome strongarming a feature through the standards process, or perhaps kinda skipping that process. Terence's humorous post digs into that a bit more.

I'm not sure if Google is actually doing anything nefarious here. It's behind a flag, so I guess the point of that is to explore and research and stuff. Feels very similar to kv:storage to me, a "native module" much like a "native custom element."

But we should be extra watchful about stuff like this. If any browser goes rogue and just starts shipping stuff, web standards is over. Life for devs gets a lot harder and the web gets a lot worse. The stakes are high. And it's not going to happen overnight, it's going to happen with little tiny things like this. Keep that blue beanie on.

Regarding the element itself, it's always a bit surprising to me to see what gets legs as far as new HTML elements go. Toasts just seem like a positioned <dialog> to me, but I haven't participated in any research or anything. They are popular enough for Bootstrap to have 'em:

See the Pen
Bootstrap Toasts
by Chris Coyier (@chriscoyier)
on CodePen.
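For what it's worth, here is a minimal sketch of that "positioned <dialog>" idea; my own illustration, nothing to do with the std-toast proposal:

// Show a non-modal dialog pinned to a corner, then dismiss it after a bit.
const toast = document.createElement('dialog');
toast.textContent = 'Saved!';
toast.style.cssText = 'position: fixed; right: 1em; bottom: 1em; margin: 0;';
document.body.appendChild(toast);
toast.show(); // show() is non-modal, unlike showModal()
setTimeout(() => toast.close(), 3000);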

I would have guessed something like dropdown menus or tabs would have been the strongest contenders for "native" web components.

The post Toast appeared first on CSS-Tricks.

Restricting a (pseudo) element to its parent’s border-box

Css Tricks - Tue, 07/02/2019 - 4:19am

Have you ever wanted to ensure that nothing of a (pseudo) element gets displayed outside its parent's border-box? In case you're having trouble picturing what that looks like, let's say we wanted to get the following result with minimal markup while avoiding brittle CSS.

The desired result.

This means we cannot add any elements just for visual purposes and we cannot create shapes out of multiple pieces, whether that's directly or via masks. We also want to avoid long, long lists of anything (think something like tens of background layers or box shadows or points inside a polygon() function) in our generated code because, while the results can be fun, it's not really practical to do something like that!

How do you think we can achieve this, given the parts the arrows point towards? Fancy giving it a try before checking my solution below? It's one of those things that seems simple at first, but once you actually try it, you discover it's much trickier.

Markup

Each item is a paragraph (<p>) element. I was lazy and generated them with Pug out of an array of objects which hold the item's gradient stop list and its paragraph text:

- var data = [
-   {
-     slist: ['#ebac79', '#d65b56'],
-     ptext: 'Pancake muffin chocolate syrup brownie.'
-   },
-   {
-     slist: ['#90cbb7', '#2fb1a9'],
-     ptext: 'Cake lemon berry muffin plum macaron.'
-   },
-   {
-     slist: ['#8a7876', '#32201c'],
-     ptext: 'Wafer apple tart pie muffin gingerbread.'
-   },
-   {
-     slist: ['#a6c869', '#37a65a'],
-     ptext: 'Liquorice plum topping chocolate lemon.'
-   }
- ].reverse();
- var n = data.length;

while n--
  p(style=`--slist: ${data[n].slist}`) #{data[n].ptext}

This generates the following unspectacular HTML:

<p style='--slist: #ebac79, #d65b56'>Pancake muffin chocolate syrup brownie.</p>
<p style='--slist: #90cbb7, #2fb1a9'>Cake lemon berry muffin plum macaron.</p>
<p style='--slist: #8a7876, #32201c'>Wafer apple tart pie muffin gingerbread.</p>
<p style='--slist: #a6c869, #37a65a'>Liquorice plum topping chocolate lemon.</p>

Basic styles

For the paragraph elements, we set a font, dimensions and a border-radius that's half the height value:

$w: 26em;
$h: 5em;

p {
  width: $w;
  height: $h;
  border-radius: .5*$h;
  background: silver;
  font: 1.5em/1.375 trebuchet ms, verdana, sans-serif;
}

We've also set a dummy background so we can see their limits:

See the Pen by thebabydino (@thebabydino) on CodePen.

Paragraph background

We have three top-to-bottom gradients, which means we can place each of them within the limits of a different layout box: the top gradient layer is limited to the content-box, the middle one to the padding-box and the bottom one to the border-box. If you need an in-depth refresher on this technique, check out this article, but the basic idea is you picture these layout boxes as nested rectangles.

The layout boxes. (Demo)

This is pretty much how browser DevTools presents them.

The layout boxes as shown by Chrome (left) vs. Firefox (middle) vs. Edge (right).

You may be wondering why we don't just layer gradients with different sizes given by background-size and with background-repeat: no-repeat. Well, this is because we'd only get rectangles without rounded corners that way.

Using the background-clip method, if we have a border-radius, our background layers will follow it. Meanwhile, the actual border-radius we set is used to round the corners of the border-box; that same radius minus the border-width rounds the corners of the padding-box. Then we subtract the padding as well to round the corners of the content-box.

See the Pen by thebabydino (@thebabydino) on CodePen.

So let's code this!

We set a transparent border and a padding. We make sure they get subtracted from the dimensions we've set by switching to box-sizing: border-box. Finally, we layer three gradients: the top one restricted to the content-box, the middle one to the padding-box and the bottom one to the border-box.

p {
  /* same styles as before */
  display: flex;
  align-items: center;
  justify-content: space-between;
  box-sizing: border-box;
  border: solid $b transparent;
  padding: $p;
  background: linear-gradient(#dbdbdb, #fff) content-box,
              linear-gradient(var(--slist)) padding-box,
              linear-gradient(#fff, #dcdcdc) border-box;
  text-indent: 1em;
}

We've also set a flex layout and a text-indent to move the text content away from the banner edges:

See the Pen by thebabydino (@thebabydino) on CodePen.

Numbering

Before we move on to the tricky part, let's get the paragraph numbers out of the way!

We add them using a counter we set as the content value on the :after pseudo-element. We first make this :after a square whose edge length equals the paragraph height (which is $h) minus the top and bottom border-width (both equal to $b). Then we turn this square into a circle by setting border-radius: 50% on it. We make it inherit its parent's box-sizing and border and we then set its background in a similar manner as we did for its parent.

$d: $h - 2*$b;

p {
  /* same styles as before */
  counter-increment: c;

  &:after {
    box-sizing: inherit;
    border: inherit;
    width: $d;
    height: $d;
    border-radius: 50%;
    box-shadow: inset 0 0 1px 1px #efefef,
                inset 0 #{-$b} rgba(#000, .1);
    background: linear-gradient(var(--slist)) padding-box,
                linear-gradient(#d0d0d0, #e7e7e7) border-box;
    color: #fff;
    content: counter(c, decimal-leading-zero);
  }
}

Alright, this is starting to look like something!

See the Pen by thebabydino (@thebabydino) on CodePen.

We still need to make a few tweaks to the CSS of this :after pseudo-element — a margin-right that's minus its parent's padding and adjustments to its inner layout so we have the number dead in the middle. That's pretty much it for the numbering part!

p {
  /* same styles as before */

  &:after {
    /* same styles as before */
    display: grid;
    place-content: center;
    margin-right: -$p;
    text-indent: 0;
  }
}

We're getting closer!

See the Pen by thebabydino (@thebabydino) on CodePen.

The tricky part

And we're finally here!

We start off by using the :before pseudo-element, absolutely positioning it on the right side and making it a square whose edge length equals its parent's height:

p {
  /* same styles as before */
  position: relative;
  outline: solid 2px orange;

  &:before {
    position: absolute;
    right: -$b;
    width: $h;
    height: $h;
    outline: solid 2px purple;
    content: '';
  }
}

We've also given both this pseudo-element and its parent some dummy outlines so that we can check the alignment:

See the Pen by thebabydino (@thebabydino) on CodePen.

Alright, we now give this :before a dummy background, rotate it, and then give it a border-radius and a nice box-shadow:

p {
  /* same styles as before */

  &:before {
    /* same styles as before */
    border-radius: $b;
    transform: rotate(45deg);
    box-shadow: 0 0 7px rgba(#000, .2);
    background: linear-gradient(-45deg, orange, purple);
  }
}

And we get the following result!

See the Pen by thebabydino (@thebabydino) on CodePen.

We now have a small problem: the :before pseudo-element is absolutely positioned and is now on top of the :after pseudo-elements that holds the numbering! We can fix this by setting position: relative on the :after pseudo-element.

See the Pen by thebabydino (@thebabydino) on CodePen.

Here's where things start getting interesting!

Restricting the background gradient

First, we need to set the stop positions on the gradient of our :before pseudo-element such that they match the bottom and top edges of the parent. This is because we want to have a certain hex value along the top edge of the parent and a certain hex value along the bottom edge of the parent.

The stop positions we need to compute.

Since we've rotated our square :before by 45°, its top-left corner now points upward (and, conversely, its bottom-right corner points downward).

How rotation changes position of the square's corners.

A gradient to the top-left corner of a square is a gradient in the -45° direction (because the 0° angle is at 12 o'clock and the positive direction is, just as for transforms, the clockwise one). A gradient to a corner means the 100% point is located in that corner.

How rotation changes the gradient direction.

The 50% line of a gradient always passes through the midpoint (the point at the intersection of the diagonals) of the gradient box.

The gradient box is the box within which we paint the gradient and whose size is given by the background-size. Since we haven't set a background-size, the default for gradients is to use the entire box defined by background-origin, which is the padding-box by default. And since we don't have a border or a padding on our :before pseudo-element, all three boxes (content-box, padding-box and border-box) coincide with one another and with the gradient box.

In our case, we have the following lines perpendicular to the direction of the -45°-pointing gradient line:

Getting the relevant stop positions.
  • the 0% line, passing through the bottom right corner of the :before
  • the bottom edge of the pseudo-element's paragraph parent
  • the 50% line which splits our square diagonally into two mirrored right isosceles triangles; given the way we've aligned our paragraph and its pseudo-elements, this line is also a midline for the paragraph itself, splitting it into two halves, each with a height equal to half the paragraph's height ($h).
  • the top edge of the pseudo-element's paragraph parent
  • the 100% line, passing through the top left corner of the :before

This means we need to restrict the -45°-pointing gradient on our :before pseudo-element between calc(50% - #{.5*$h}) (corresponding to the paragraph's bottom edge) and calc(50% + #{.5*$h}) (corresponding to the paragraph's top edge).

Sure enough, this does it!

linear-gradient(-45deg, orange calc(50% - #{.5*$h}), purple calc(50% + #{.5*$h}))

See the Pen by thebabydino (@thebabydino) on CodePen.

Adding a sharp transition from and to transparent at these stop positions makes it a lot more obvious they're the correct ones:

linear-gradient(-45deg, transparent calc(50% - #{.5*$h}), orange 0, purple calc(50% + #{.5*$h}), transparent 0)

See the Pen by thebabydino (@thebabydino) on CodePen.

Restricting the pseudo-element itself

The next step is to prevent the :before pseudo-element from spilling outside the boundaries of its parent.

That's easy, right? Just set overflow: hidden on the paragraph!

Well, let's do that!
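In the same shorthand as before, that attempt is:

p {
  /* same styles as before */
  overflow: hidden;
}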

This is the result we get:

See the Pen by thebabydino (@thebabydino) on CodePen.

Oops, this is not what we were going for!

What we got with overflow: hidden (left) vs. what we want (right).

The problem is that overflow: hidden cuts out everything outside an element's padding-box, but what we want here is to cut the parts of the :before pseudo-element that are outside the border-box, which in our case is larger than the padding-box. That's because we have a non-zero border, and we can't ditch it (which would make the border-box equal to the padding-box and solve the issue) because we need three background layers on our paragraph: the top one covering the content-box, the middle one covering the padding-box and the bottom one covering the border-box.

The solution? Well, if you've taken a peek at the tags, you've probably guessed by now: use clip-path instead!

Pretty much every article and demo using clip-path out there makes use of either an SVG reference or the polygon() shape function, but these are not the only options we have!

Another possible shape function (and the one we'll be using here) is inset(). This function specifies a clipping rectangle defined by the distances from the top, right, bottom and left edges. Edges of what? Well, by default[1], that's the edges of the border-box, which is exactly what we need here!

How the inset() function works. (Demo)

So let's ditch overflow: hidden and use clip-path: inset(0) instead. Spelled out, the swap is:
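p {
  /* same styles as before, minus overflow: hidden */
  clip-path: inset(0);
}

This is the result we get: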

See the Pen by thebabydino (@thebabydino) on CodePen.

This is better, but not quite what we want because it doesn't take into account the paragraph's border-radius. Fortunately, inset() also lets us specify a rounding that can take any border-radius value we want. No joke, any valid border-radius value works — for example, this:

clip-path: inset(0 round 15% 75px 35vh 13vw / 3em 5rem 29vmin 12.5vmax)

We only need something much simpler though:

$r: .5*$h;

p {
  /* same styles as before */
  border-radius: $r;
  clip-path: inset(0 round $r);
}

And now we finally get the result we wanted:

See the Pen by thebabydino (@thebabydino) on CodePen.

Final touches

Since we don't want a purple-orange gradient on the :before, we replace those with the actual values we need. We then place the paragraphs in the middle because that looks better. Finally, we give our paragraphs a shadow by setting a drop-shadow() filter on the body (we cannot use box-shadow on the paragraphs themselves because we've used clip-path, which clips out the box-shadow so we wouldn't see it anyway). And that's it!
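A sketch of that shadow bit (the exact shadow values here are placeholders of mine, not necessarily the demo's):

body {
  /* box-shadow on the clipped paragraphs would be cut off,
     so cast the shadow from the parent instead */
  filter: drop-shadow(0 0 7px rgba(#000, .2));
}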

See the Pen by thebabydino (@thebabydino) on CodePen.

  [1] We should be able to change this <geometry-box> value, but Chrome does not implement this part of the spec. There is an issue open for this which you can star or where you can leave a comment with your use cases for being able to change the default value.

The post Restricting a (pseudo) element to its parent’s border-box appeared first on CSS-Tricks.

Tips for rolling your own lazy loading

Css Tricks - Mon, 07/01/2019 - 4:16am

You may have heard (or even issued the call) that “we can just use lazy loading!” when looking for a way to slim down a particularly heavy web page.

Lazy loading is a popular technique for gradually requesting images as they come into view, rather than all at once after the HTML of the page has been parsed. It can reduce the initial page weight, and help us hit our performance budgets by requesting images when they're needed.

It can be effective. But it also comes with some baggage of its own. We’ll get to that! In fact, Rahul Nanwani did an extensive write-up that hits several lazy-loading methods and illustrates just how complex some are.

In this post, we’ll look at an implementation that's already been covered in brief detail in this post by Preethi. We're going to expand on that so you can add your own implementation of lazy loading to your site as I’ve done on this little demo site.

Here’s what we’ll be building:

Demo Site

GitHub Repo

Why not native lazy loading?

Currently, lazy loading is not something browsers can do for us natively, although that looks set to change soon for some browsers with the launch of Chrome 75, which aims to bring lazy loading support for images and iframes. Until that time (and beyond, if we are to play nicely with other browsers, which we should), lazy loading is implemented using JavaScript. There are a number of libraries and frameworks out there to help.

Some static site generators, libraries, and frameworks include utilities to provide this capability “out of the box”, which proves popular as people look for built-in ways to include this feature in their sites. But I’ve also noticed a trend where some chose to adopt entire libraries or frameworks in order to gain access to just this feature. As a thrifty, performance and inclusivity blow-hard, I’m a bit cautious about this. So let’s look at how you could implement this yourself without the need for a specific framework or library.

The typical mechanics for lazy loading

Most approaches follow a pattern like this:

First, some HTML to define our lazily loaded images

<!-- Don't include a src attribute in images you wish to load lazily.
     Instead specify their src safely in a data attribute -->
<img data-src="lighthouse.jpg" alt="A snazzy lighthouse" class="lazy" />

When should image loading take place?

Next, we use some sort of JavaScript magic to correctly set the src attribute when the image comes into view. This was once an expensive JavaScript operation, involving listening for window scroll and resize events, but IntersectionObserver has come to the rescue.

Creating an intersection observer looks like this:

// Set up an intersection observer with some options
var observer = new IntersectionObserver(lazyLoad, {
  // where in relation to the edge of the viewport we are observing
  rootMargin: "100px",

  // how much of the element needs to have intersected
  // in order to fire our loading function
  threshold: 1.0
});

We've told our new observer to call a function named lazyLoad when those observable conditions are met. The entries for elements that satisfy those conditions get passed to that function so that we can manipulate them... like actually load and display them.

function lazyLoad(entries) {
  entries.forEach(entry => {
    if (entry.intersectionRatio > 0) {
      // the entry is an IntersectionObserverEntry, so grab its element
      var image = entry.target;

      // set the src attribute to trigger a load
      image.src = image.dataset.src;

      // stop observing this element. Our work here is done!
      observer.unobserve(image);
    }
  });
}

Great. Our images will have the correct src assigned to them as they come into view, which will cause them to load. But which images? We need to tell the Intersection Observer API which elements we care about. Luckily, we assigned each one a CSS class of .lazy for just this purpose.

// Tell our observer to observe all img elements with a "lazy" class
var lazyImages = document.querySelectorAll('img.lazy');
lazyImages.forEach(img => {
  observer.observe(img);
});

Nice. But perfect?

This seems to be working nicely, but there are some drawbacks to consider:

  1. Until (or unless) JavaScript comes along and successfully runs, we have a bunch of image elements on our page that will not work. We deliberately nixed them by removing the src attribute. That's the result we wanted, but now we are dependent on JavaScript for these images to load. While it's true that JavaScript is pretty well ubiquitous on the web these days — with the web reaching such a broad spectrum of devices and network conditions — JavaScript can become an expensive addition to our performance budgets, particularly if it is involved in the delivery and rendering of content. As Jake Archibald once pointed out, all your users are non-JS while they're downloading your JS. In other words, this is not to be taken lightly.
  2. Even when this works successfully, we have empty elements on our page which might give a bit of a visual jolt when they load in. Perhaps we can hint at the image first and do something fancy. We’ll get to that shortly.

The planned native lazy loading implementation by Chrome should help to address our first point here. If the element has been given a loading attribute, Chrome can honor the src attribute specified at the right time, rather than requesting it eagerly the moment it sees it in the HTML.

The editor's draft of the specification includes support for different loading behaviors:

  • <img loading="lazy" />: Tell the browser to load this image lazily when needed.
  • <img loading="eager" />: Tell the browser to load this image immediately.
  • <img loading="auto" />: Let the browser make its own assessment.

Browsers without this support would be able to load the image as normal thanks to the resilient nature of HTML and browsers ignoring HTML attributes that they don’t understand.

But... sound the loud caution klaxon! This feature has yet to land in Chrome, and there is also uncertainty about if and when other browsers might choose to implement it. We can use feature detection to decide which method we use, but this still doesn’t give a solid progressive enhancement approach where the images have no dependency on JavaScript.

<img data-src="lighthouse.jpg" alt="A snazzy lighthouse" loading="lazy" class="lazy" />

// If the browser supports lazy loading, we can safely assign the src
// attributes without instantly triggering an eager image load.
if ("loading" in HTMLImageElement.prototype) {
  const lazyImages = document.querySelectorAll("img.lazy");
  lazyImages.forEach(img => {
    img.src = img.dataset.src;
  });
} else {
  // Use our own lazy loading with Intersection Observers and all that jazz
}

As a companion to responsive images

Assuming that we are comfortable with the fact that JavaScript is a dependency for the time being, let’s turn our attention to a related topic: responsive images.

If we’re going through the trouble of delivering images into the browser only when needed, it seems fair that we might also want to make sure that we are also delivering them in the best size for how they’ll be displayed. For example, there's no need to download the 1200px-wide version of an image if the device displaying it will only give it a width of 400px. Let's optimize!

HTML gives us a couple of ways to implement responsive images which associate different image sources to different viewport conditions. I like to use the picture element like this:

<picture>
  <source srcset="massive-lighthouse.jpg" media="(min-width: 1200px)">
  <source srcset="medium-lighthouse.jpg" media="(min-width: 700px)">
  <source srcset="small-lighthouse.jpg" media="(min-width: 300px)">
  <img src="regular-lighthouse.jpg" alt="snazzy lighthouse" />
</picture>

You’ll notice that each source element has a srcset attribute which specifies an image URL, and a media attribute that defines the conditions under which this source should be used. The browser selects the most suitable source from the list according to the media conditions with a standard img element acting as a default/fallback.

Can we combine these two approaches to make lazy-loading responsive images?

Of course, we can! Let’s do it.

Instead of having an empty image until we do our lazy load, I like to load a placeholder image that has a tiny file size. This does incur the overhead of making more HTTP requests, but it also gives a nice effect of hinting at the image before it arrives. You might have seen this effect on Medium or as a result of a site using Gatsby’s lazy loading mechanics.

We can achieve that by initially defining the image sources in our picture element as tiny versions of the same asset and then using CSS to scale them to the same size as their higher-resolution brothers and sisters. Then, through our intersection observer, we can update each of the specified sources to point at the correct image sources.

Our picture element might look like this:

<picture>
  <source srcset="tiny/lighthouse.jpg" media="(min-width: 1200px)">
  <source srcset="tiny/lighthouse.jpg" media="(min-width: 700px)">
  <source srcset="tiny/lighthouse.jpg" media="(min-width: 300px)">
  <img src="tiny/lighthouse.jpg" alt="snazzy lighthouse" />
</picture>

No matter what viewport size is applied, we’ll display a tiny 20px image. We're going to blow it up with CSS next.

Previewing the image with style

The browser can scale up the tiny preview image for us with CSS so that it fits the entire picture element rather than a mere 20px of it. Things are going to get a little... pixelated, as you may imagine when a low-resolution image is blown up to larger dimensions.

picture {
  width: 100%; /* stretch to fit its containing element */
  overflow: hidden;
}

picture img {
  width: 100%; /* stretch to fill the picture element */
}

For good measure, we can soften that pixelation introduced by scaling up the image by using a blur filter.

picture.lazy img {
  filter: blur(20px);
}

Switching sources with JavaScript

With a little adaptation, we can use the same technique as before to set the correct URLs for our srcset and src attributes.

function lazyLoad(entries) {
  entries.forEach(entry => {
    if (entry.intersectionRatio > 0) {
      // gather all the image and source elements in this picture
      var sources = entry.target.children;

      for (var s = 0; s < sources.length; s++) {
        var source = sources[s];

        // set a new srcset on the source elements
        if (source.hasAttribute("srcset")) {
          source.setAttribute("srcset", ONE_OF_OUR_BIGGER_IMAGES);
        }
        // or a new src on the img element
        else {
          source.setAttribute("src", ONE_OF_OUR_BIGGER_IMAGES);
        }
      }

      // stop observing this element. Our work here is done!
      observer.unobserve(entry.target);
    }
  });
}

One last step to complete the effect: remove that blur effect from the image once the new source has loaded. A JavaScript event listener waiting for the load event on each new image resource can do that for us.

// remove the lazy class when the full image is loaded to unblur
source.addEventListener('load', event => {
  event.target.closest("picture").classList.remove("lazy");
}, false);

We can make a nice transition that eases the blur away with a sprinkle of CSS.

picture img {
  /* ... */
  transition: filter 0.5s;
}

A little helper from our friends

Great. With just a little JavaScript, a few lines of CSS and a very manageable dollop of HTML, we’ve created a lazy loading technique which also caters for responsive images. So, why aren’t we happy?

Well, we’ve created two bits of friction:

  1. Our markup for adding images is more complex than before. Life used to be simple when all we needed was a single img tag with a good old src attribute.
  2. We'll also need to create multiple versions of each image asset to populate each viewport size and the pre-loaded state. That's more work.

Never fear. We can streamline both of these things.

Generating the HTML elements

Let’s look first at generating that HTML rather than authoring it by hand each time.

Whatever tool you use to generate your HTML, chances are that it includes a facility to use includes, functions, shortcodes, or macros. I'm a big fan of using helpers like this. They keep more complex or nuanced code fragments consistent and save us from writing the same lengthy code by hand each time. Most static site generators have this sort of ability.

  • Jekyll lets you create custom Plugins
  • Hugo gives you custom shortcodes
  • Eleventy has shortcodes for all of the template engines it supports
  • There are many more...

As an example, I made a shortcode called lazypicture in my example project built with Eleventy. The shortcode gets used like this:

{% lazypicture lighthouse.jpg "A snazzy lighthouse" %}

To generate the HTML that we need at build time:

<picture class="lazy">
  <source srcset="/images/tiny/lighthouse.jpg" media="(min-width: 1200px)">
  <source srcset="/images/tiny/lighthouse.jpg" media="(min-width: 700px)">
  <source srcset="/images/tiny/lighthouse.jpg" media="(min-width: 300px)">
  <img src="/images/tiny/lighthouse.jpg" alt="A snazzy lighthouse" />
</picture>

Generating the image assets

The other bit of work we have created for ourselves is generating differently sized image assets. We don’t want to manually create and optimize each and every size of every image. This task is crying out for some automation.

The way you choose to automate this should take into account the number of image assets you need and how regularly you might add more images to that set. You might choose to generate those images as part of each build. Or you could make use of an image transformation service at request time. Let's look a little at both options.

Option 1: Generating images during your build

Popular utilities exist for this. Whether you run your builds with Grunt, Gulp, webpack, Make, or something else, chances are there is a utility for you.

The example below is using gulp-image-resize in a Gulp task as part of a Gulp build process. It can chomp through a directory full of image assets and generate the variants you need. It has a bunch of options for you to control, and you can combine with other Gulp utilities to do things like name the different variants according to the conventions you choose.

var gulp = require('gulp');
var imageResize = require('gulp-image-resize');

gulp.task('default', function () {
  gulp.src('src/**/*.{jpg,png}')
    .pipe(imageResize({
      width: 100,
      height: 100
    }))
    .pipe(gulp.dest('dist'));
});

The CSS-Tricks site uses a similar approach (thanks to the custom sizes feature in WordPress) to auto-generate all of its different image sizes. (Oh yeah! CSS-Tricks walks the walk!) ResponsiveBreakpoints.com provides a web UI to experiment with different settings and options for creating images sets and even generates the code for you.

Or, you can use it programmatically as Chris mentioned on Twitter.

I use this on @CSS, but programmatically!

Every image uploaded is ran through this to find perfect breakpoints, then uploaded to @cloudinary and served from there (originals on our server).https://t.co/Gt75vujTVL

— Chris Coyier (@chriscoyier) June 10, 2019

When you have as many image files as CSS-Tricks, though, doing this work as part of a build step can become cumbersome. Good caching in your build and other file management tasks can help, but it can be easy to end up with a lengthy build process that heats up your computer as it performs all of the work.

An alternative is to transform these resources at request time rather than during a build step. That's the second option.

Option 2: On-demand image transformations

I’m a loud advocate of pre-rendering content. I’ve shouted about this approach (often referred to as JAMstack) for quite some time, and I believe that it has numerous performance, security and simplicity benefits. (Chris summed this up nicely in a post about static hosting and JAMstack.)

That said, the idea of generating different image sizes at request time might seem to be contrary to my lazy loading objectives. In fact, there are a number of services and companies now who specialize in this, and they do it in a very powerful and convenient way.

Combining image transformations with powerful CDN and asset caching capabilities by companies like Netlify, Fastly, and Cloudinary can rapidly generate images with the dimensions you pass to them via a URL. Each service has significant processing power to perform these transformations on the fly, then cache the generated images for future use. This makes for seamless rendering for subsequent requests.

Since I work at Netlify, I’ll illustrate this with an example using Netlify’s service. But the others I mentioned work in similar ways.

Netlify’s Image Transformation service builds on top of something called Netlify Large Media. This is a feature created to help manage large assets in your version control. Git is not very good at this by default, but Git Large File Storage can extend Git to make it possible to include large assets in your repos without clogging them up and making them unmanageable.

You can read more on the background of that approach for managing large assets if you are interested.

Placing images under version control in our Git repositories is an added bonus, but for our purposes, we are more interested in enjoying the benefits of making on-the-fly transformations of those images.

Netlify looks for querystring parameters when transforming images. You can specify the height, width and the type of crop you’d like to perform. Like this:

  • A raw image with no transformations:
    /images/apple3.jpg
  • An image resized to be 300px wide:
    /images/apple3.jpg?nf_resize=fit&w=300
  • An image cropped to be 500px by 500px with automated focal point detection:
    /images/apple3.jpg?nf_resize=smartcrop&w=500&h=500

Knowing that we can create and deliver any image size from a single source image in our version control means that the JavaScript we use to update the image sources only needs to include the size parameters we choose.
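For instance, here is a tiny sketch of that URL-building step, using a hypothetical transformedUrl() helper of my own naming around Netlify's documented nf_resize parameters:

// Hypothetical helper: build a resized-image URL from a source path
// and a target width using Netlify's nf_resize querystring parameters.
function transformedUrl(src, width) {
  return src + "?nf_resize=fit&w=" + width;
}

// e.g. inside our lazyLoad function:
// source.setAttribute("srcset", transformedUrl("/images/lighthouse.jpg", 1200));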

The approach can drastically speed up your site build processes because the work is now outsourced and not performed at build time.

Wrapping it all up

We've covered a lot of ground here. There are a lot of very achievable options for implementing responsive images with lazy loading. Hopefully, this will give you enough info to make you think twice about reaching for the nearest available framework to gain access to this sort of functionality.

This demo site pulls together a number of these concepts and uses Netlify’s image transformation service.

Demo Site

GitHub Repo

One last time, to summarize the flow

  • A static site generator with a shortcode eases the task of creating the picture elements
  • Netlify Large Media hosts and transforms the images, then serves them as tiny 20px-wide versions before the larger files are loaded as needed.
  • CSS scales up the tiny images and blurs them to create the preview placeholder images.
  • The Intersection Observer API detects when to swap the image assets for the appropriate larger versions.
  • JavaScript detects the load event for the larger images and removes the blur effect to reveal the higher-resolution rendering.

The post Tips for rolling your own lazy loading appeared first on CSS-Tricks.

Making width and flexible items play nice together

Css Tricks - Mon, 07/01/2019 - 4:15am

The short answer: flex-shrink and flex-basis are probably what you’re lookin’ for.

The long answer

Let's say you want to align an image and some text next to each other, like this:

Now let's say you reach for flexbox to make it happen. Setting the parent element to display: flex; is a good first step.

.container { display: flex; }

And this results in...

See the Pen
Flex-Shrink Example 1
by Robin Rendle (@robinrendle)
on CodePen.

Yikes! Well, that's kinda okay, I guess. It makes sense that the image would bump right up against the text like that because we haven’t set a width on the image. Ideally, though, we’d like that image to have a fixed width and then the text should take up whatever space is left over.

Okay, so let’s go do that!

.container {
  display: flex;
}

img {
  width: 50px;
  margin-right: 20px;
}

See the Pen
Flex-Shrink Example 2
by Robin Rendle (@robinrendle)
on CodePen.

This looks great in Chrome. But wait, what? If we inspect the image tag in Firefox DevTools, we’ll find that it’s not the width value that we set at all:

We could use min-width to force the image to the 50px width we want:

img {
  min-width: 50px;
}

Buuuuuuut, that only helps with the width, so we've still got to put the margin back in as well.

img {
  min-width: 50px;
  margin-right: 20px;
}

There we go. That's better in Firefox and still works in Chrome.

The even longer answer

I realized the image is getting the squished treatment because we need to use the flex-shrink property to tell flex items not to decrease in size, regardless of whether or not they have a width.

All flex items have a flex-shrink value of 1 by default. We need to set it to 0 on the image element:

.container {
  display: flex;
}

img {
  width: 50px;
  margin-right: 20px;
  flex-shrink: 0;
}

See the Pen
Flex-Shrink Example 3
by Robin Rendle (@robinrendle)
on CodePen.

Getting better! But we can still do more to improve this.

The director's cut answer

We can tidy things up further because flex-shrink is included in the flex shorthand property.

flex: none | [ <'flex-grow'> <'flex-shrink'>? || <'flex-basis'> ]

If we set the flex-shrink value to 0 and the flex-basis value to the default width we want the image to be, then we can get rid of the width property altogether.

.container {
  display: flex;
}

img {
  flex: 0 0 50px;
  margin-right: 20px;
}

Oh yeah:

See the Pen
Flex-Shrink Example 2
by Geoff Graham (@geoffgraham)
on CodePen.

Another example

That flex-shrink property solves a ton of other problems and is pretty dang important if you want to start using flexbox. Here’s another example why: I stumbled upon yet another problem like the one above and I mentioned it in a recent edition of the newsletter. I was building a navigation component that would let users scroll left and right through multiple items. I noticed the following problem when checking my work:

See the Pen
flex-shrink nav item 1
by Robin Rendle (@robinrendle)
on CodePen.

That longer navigation item shouldn't break into multiple lines like that — but I finally understood why this was happening, thanks to the previous issue. If you set the flex-shrink property to 0, it tells each item in this navigation not to shrink and to assume the width of its content instead, like this:

See the Pen
flex-shrink nav item
by Robin Rendle (@robinrendle)
on CodePen.
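In code, the change is a one-liner (assuming the items share a class like .nav-item; the class name is mine, not the demo's):

.nav-item {
  flex-shrink: 0; /* never shrink below the content's width */
}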

And, yes, we can go the extra step once again and use the flex property instead, this time using auto as the flex-basis, since we want each item's content size to be considered when divvying up space in the navigation container. A sketch of that shorthand follows the demo.

See the Pen
Setting flex for flexible nav elements
by Geoff Graham (@geoffgraham)
on CodePen.
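Sketched out, that shorthand might look like this (the grow value is my assumption; the article only pins down the shrink and basis parts):

.nav-item {
  /* grow (assumed) | shrink | basis */
  flex: 1 0 auto;
}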

Huzzah! We figured it out. Even though the answer is a single line of code, it's a pretty essential one to making truly flexible elements.

The post Making width and flexible items play nice together appeared first on CSS-Tricks.

The (Developer’s) Growth Model

Css Tricks - Fri, 06/28/2019 - 1:25pm

I really like the post "The Designer’s Growth Model" by Dennis Hambeukers. Dennis just invented this model, but it's based on some existing ideas and it all rings true for me. Let me try to summarize the five stages as he lays them out for designers.

  1. Producers: You learn how to design. You learn fundamentals, you practice, you get good at doing design work and producing beautiful functional things. Then you have this "crisis" moment before the next stage where you find you can't do enough work on your own and that you need to be able to scale your efforts, with multiple human beings and working on systems — and that's an entirely new skill.
  2. Architects: Now that you've succeeded in scaling through team building and systems thinking, the next crisis moment is that the work still might be isolated and too focused on internal thinking. To grow, you'll need to work with people outside the design bubble and understand problems more holistically.
  3. Connectors: Now that you've succeeded in being more collaborative across an entire organization and being a real problem solver, the next crisis moment is when everything becomes organizationally complicated. Just delivering products isn't enough, because you're involved deeply across the organization and you're responsible for the success of what is delivered.
  4. Scientists: Now, you measure everything. You know what works and what doesn't because you test it and can prove it, along with using all the skills you've honed along the way. Your next crisis is figuring out how to translate your work into actual change.
  5. Visionaries: You're a leader now. You have an understanding of how the whole organization ticks, and you are a force for change.
From The Designer's Growth Model

I think this applies just as well to web development, with very little change. I can relate in many ways. I started plucking away at building sites alone. I found more success and was able to build bigger things by working with other people. At some point, it was clear to me that things don't revolve around development. Development is merely one part of a car that doesn't drive at all without many other parts. Even today, it's clearer to me that I can be more effective and drive more positive change the more I know about all of the parts.

Not that I've completed my journey. If I had to map myself directly onto this model, I'm probably barely on step three — but a model is just a model. It's not meant to be a perfect roadmap for everybody. Your own career path will be twistier than this. You might even experience bits from all the levels in different doses along the way.

The post The (Developer’s) Growth Model appeared first on CSS-Tricks.

Weekly Platform News: Event Timing, Google Earth for Web, undead session cookies

Css Tricks - Fri, 06/28/2019 - 7:33am

Šime posts regular content for web developers on webplatform.news.

In this week's news, Wikipedia helps identify three slow click handlers, Google Earth comes to the web, SVG properties in CSS get more support, and what to do in the event of zombie cookies.

Tracking down slow event handlers with Event Timing

Event Timing is experimentally available in Chrome (as an Origin Trial) and Wikipedia is taking part in the trial. This API can be used to accurately determine the duration of event handlers with the goal of surfacing slow events.

We quickly identified 3 very frequent slow click handlers experienced frequently by real users on Wikipedia. [...] Two of those issues are caused by expensive JavaScript calls causing style recalculation and layout.

(via Gilles Dubuc)

Google Earth for Web beta available

The preview version of Google Earth for Web (powered by WebAssembly) is now available. You can try it out in Chromium-based browsers and Firefox — it runs single-threaded in browsers that don't yet have (re-)enabled SharedArrayBuffer — but not in Safari because of its lack of full support for WebGL2.

(via Jordon Mears)

SVG geometry properties in CSS

Firefox Nightly has implemented SVG geometry properties (x, y, r, etc.) in CSS. This feature is already supported in Chrome and Safari and is expected to ship in Firefox 69 in September.

See the Pen
Animating SVG geometry properties with CSS
by Šime Vidas (@simevidas)
on CodePen.

(via Jérémie Patonnier)

Browsers can keep session cookies alive

Chrome and Firefox allow users to restore the previous browser session on startup. With this option enabled, closing the browser will not delete the user’s session cookies, nor empty the sessionStorage of web pages.

Given this session resumption behavior, it’s more important than ever to ensure that your site behaves reasonably upon receipt of an outdated session cookie (e.g. redirect the user to the login page instead of showing an error).

(via Eric Lawrence)

The post Weekly Platform News: Event Timing, Google Earth for Web, undead session cookies appeared first on CSS-Tricks.

Nownownow

Css Tricks - Fri, 06/28/2019 - 7:32am

Matthias Ott, relaying an idea he heard from Derek Sivers:

Many personal websites, including this one here, have an “about” page. It’s a page that tells you something about the background of a person or about the services provided. But what this page often doesn’t answer – and neither do Twitter or Facebook pages – is what this person really is up to at the moment. A page that answers questions like: What are you focused on at this point in your life? What have you just started working on that excites you like nothing else? Did you just move to a new town? Did you start a new career as a Jengascript wrangler? To answer all those questions, Derek suggests to create a “now page”. A page that tells visitors of your site “what you’d tell a friend you hadn’t seen in a year.”

Very cool idea! Derek has a directory page of people who have done this.

I have more scattered thoughts:

  • It's funny how social media sites aren't very helpful with this. You'd think looking at someone's social media profile would be the quickest and easiest way to catch up with what they are doing right now, but it just ain't. That's true for me, too. Random statements of what you're working on don't make very good social media posts. Maybe a pinned tweet could be like a "now" page, though.
  • I wonder if more homepages on people's personal sites should be this. As I browse around some of the sites, I like a lot of the "now" pages more than I like the homepage.
  • I went with a "what I want you to do" section on my personal site. It's a different vibe, but it almost doubles as a "now" page, as the things I want you to do are fairly related to the things I'm doing. Maybe the idea of a do page has some legs.

Direct Link to ArticlePermalink

The post Nownownow appeared first on CSS-Tricks.

Which CSS IS AWESOME makes the most sense if you don’t know CSS well?

Css Tricks - Thu, 06/27/2019 - 11:29am

Peter-Paul posted this question:

Which of the examples in the image do you consider correct?

If you know CSS well, don't reply, just retweet.

If you don't know CSS too well, please reply to the poll in the next tweet. pic.twitter.com/4bgnf9Wdkc

— ppk 🇪🇺 (@ppk) June 17, 2019

Note the interesting caveat: only vote in the poll if you don't know CSS well.

The winning answer was D! You gotta wonder if the result would have been different if the request for non-CSS experts wasn't there.

I like to think I know CSS OK, so I didn't vote. My brain goes like this:

  1. I think he's asking "by default," so the answer may assume there's no other CSS doing anything to that text.
  2. I wish I knew why the box was that particular width, but I guess I'll just assume it's a set width.
  3. It's not B because ellipsis stuff requires extra stuff, and doesn't work on multiple lines like that — unless we're talking line clamping, which is even weirder.
  4. It's not C because that requires hiding overflow which is never really a default — that is, except off the top and left of the browser window, I guess. Or in an iframe.
  5. It's not D because words just don't break like that unless you do pretty specific stuff.
  6. A actually makes decent sense. It's weird to look at, but I've been dealing with stuff busting out of containers my whole career. C'est la vie.

Remember, we've done a deep dive into CSS IS AWESOME before and how it interestingly captures the weirdness of CSS.

The post Which CSS IS AWESOME makes the most sense if you don’t know CSS well? appeared first on CSS-Tricks.

Different Approaches for Creating a Staggered Animation

Css Tricks - Thu, 06/27/2019 - 5:00am

Animating elements, at its most basic, is fairly straightforward. Define the keyframes. Name the animation. Call it on an element.

But sometimes we need something a little more complex to get the right “feel” for the way things move. For example, a sound equalizer might use the same animation on each bar, but they are staggered to give the illusion of being animated independently.

See the Pen
Apple Music Sound Equilizer in SVG
by Geoff Graham (@geoffgraham)
on CodePen.

I was recently building a dashboard and wanted the items in one of the widgets to flow into view with a staggered animation.


Just like the sound equalizer above, I started going down the :nth-child route. I used the unordered list (<ul>) as the parent container, gave it a class and employed the :nth-child pseudo selector to offset each list item with animation-delay.

.my-list li {
  animation: my-animation 300ms ease-out;
}
.my-list li:nth-child(1) { animation-delay: 100ms; }
.my-list li:nth-child(2) { animation-delay: 200ms; }
.my-list li:nth-child(3) { animation-delay: 300ms; }
/* and so on */

This technique does indeed stagger items well, particularly if you know how many items are going to be in the list at any given time. Where things fall apart, however, is when the number of items is unpredictable, which was the case for the widget I was building for the dashboard. I really didn’t want to come back to this piece of code every time the number of items in the list changed, so I knocked out a quick Sass loop that accounts for up to 50 items and increments the animation delay with each item:

.my-list {
  li {
    animation: my-animation 300ms ease-out;

    @for $i from 1 through 50 {
      &:nth-child(#{$i}) {
        animation-delay: 100ms * $i;
      }
    }
  }
}

That should do it! Yet, it feels way too hacky. Sure, it doesn't add that much weight to the file, but you know the compiled CSS will include a bunch of unused selectors, like :nth-child(45).

There must be a better way. This is where I would normally reach for JavaScript to find all of the items and add a delay but… this time I spent a little time exploring to see if there is a way to do it with CSS alone.

How about CSS counters?

The first thing I thought of was using a CSS counter in combination with the calc() function:

.my-list {
  counter-reset: my-counter;
}

.my-list li {
  counter-increment: my-counter;
  animation-delay: calc(counter(my-counter) * 100ms);
}

Unfortunately, that won't work because the spec says counters cannot be used in calc():

Components of a calc() expression can be literal values or attr() or calc() expressions.

Turns out a few people like this idea, but it hasn’t gone further than the draft stage.

How about a data attribute?

Having read that excerpt from the spec, I learned that calc() can use attr(). And, according to the CSS Values and Units specification:

In CSS3, the attr() expression can return many different types

This made me think: perhaps a data attribute could do the trick.

<ul class="my-list">
  <li data-count="1"></li>
  <li data-count="2"></li>
  <li data-count="3"></li>
  <li data-count="4"></li>
</ul>

.my-list li {
  animation-delay: calc(attr(data-count) * 150ms);
}

But my hopes were dashed as the browser support for this is diabolical!

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome No · Opera No · Firefox No · IE No · Edge No · Safari No
Mobile / Tablet: iOS Safari No · Opera Mobile No · Opera Mini No · Android No · Android Chrome No · Android Firefox No

So, back to the drawing board.

How about custom properties?

The next idea I had was using CSS custom properties. It's not pretty, but it worked 🙂

See the Pen
CSS variables animation order
by Dan Benmore (@dbenmore)
on CodePen.
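The gist of the approach, as a minimal sketch (the markup is my reconstruction, not lifted from the demo; the --animation-order name matches the feature query shown later): set an inline custom property on each item, then feed it to calc() for the delay.

<ul class="my-list">
  <li style="--animation-order: 1;">...</li>
  <li style="--animation-order: 2;">...</li>
  <li style="--animation-order: 3;">...</li>
</ul>

.my-list li {
  animation: my-animation 300ms ease-out;
  animation-delay: calc(var(--animation-order) * 100ms);
}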

Turns out it’s pretty flexible too. For example, the animation can be reversed:

See the Pen
CSS variables reverse animation order
by Dan Benmore (@dbenmore)
on CodePen.

It can also do something completely random and animate elements at the same time:

See the Pen
CSS variables random animation order
by Dan Benmore (@dbenmore)
on CodePen.

We can even push it a bit further and do a diagonal swoosh:

See the Pen
Set animation stagger with CSS properties / variables
by Dan Benmore (@dbenmore)
on CodePen.

The browser support isn’t all that bad (pokes stick at Internet Explorer).

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 49 · Opera 36 · Firefox 31 · IE No · Edge 16 · Safari 9.1
Mobile / Tablet: iOS Safari 9.3 · Opera Mobile 46 · Opera Mini No · Android 67 · Android Chrome 75 · Android Firefox 67

One of the great features of CSS is that it will ignore things it doesn't understand, thanks to its forgiving error handling. That means, in browsers without support, everything will animate into view together. If that's not your bag, you can add a feature query to override a default animation:

.my-list li {
  animation: fallback-animation;
}

@supports (--custom: properties) {
  .my-list li {
    animation: fancy-animation;
    animation-delay: calc(var(--animation-order) * 100ms);
  }
}

Vanilla CSS FTW

The more I stop and ask myself whether I need JavaScript, the more I'm amazed at what CSS can do on its own. Sure, it would be nice if CSS counters could be used in a calc() function, and that would be a pretty elegant solution. But for now, inline custom properties provide a powerful and flexible way to solve this problem.

The post Different Approaches for Creating a Staggered Animation appeared first on CSS-Tricks.
