Developer News

Build a Chat App Using React Hooks in 100 Lines of Code

Css Tricks - Mon, 07/15/2019 - 4:52am

We’ve looked at React Hooks before, around here at CSS-Tricks. I have an article that introduces them as well, illustrating how to use them to create components through functions. Both articles are good high-level overviews of the way they work, but they open up a lot of possibilities, too.

So, that’s what we’re going to do in this article. We’re going to see how hooks make our development process easier and faster by building a chat application.

Specifically, we are building a chat application using Create React App. While doing so, we will be using a selection of React Hooks to simplify the development process and to remove a lot of boilerplate code that’s unnecessary for the work.

There are several open source React Hooks available and we’ll be putting those to use as well. These hooks can be consumed directly to build features that would otherwise take a lot more code to create. They also generally follow well-recognized standards for any functionality. In effect, this increases the efficiency of writing code and provides secure functionality.

Let’s look at the requirements

The chat application we are going to build will have the following features:

  • Get a list of past messages sent from the server
  • Connect to a room for group chatting
  • Get updates when people disconnect from or connect to a room
  • Send and receive messages

We’re working with a few assumptions as we dive in:

  • We’ll consider the server we are going to use as a black box. Don't worry about it working perfectly as we're going to communicate with it using simple sockets.
  • All the styles are contained in a single CSS file that can be copied to the src directory. All the styles used within the app are linked in the repository.
Getting set up for work

OK, we’re going to want to get our development environment ready to start writing code. First off, React requires both Node and npm. You can set them up here.

Let’s spin up a new project from the Terminal:

npx create-react-app socket-client
cd socket-client
npm start

Now we should be able to navigate to http://localhost:3000 in the browser and get the default welcome page for the project.

From here, we’re going to break the work down by the hooks we’re using. This should help us understand the hooks as we put them into practical use.

Using the useState hook

The first hook we're going to use is useState. It allows us to maintain state within our component as opposed to, say, having to write and initialize a class using this.state. Data that remains constant, such as username, is stored in useState variables. This ensures the data remains easily available while requiring a lot less code to write.

The main advantage of useState is that it's automatically reflected in the rendered component whenever we update the state of the app. If we were to use regular variables, they wouldn’t be considered as the state of the component and would have to be passed as props to re-render the component. So, again, we’re cutting out a lot of work and streamlining things in the process.
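To make that difference concrete, here’s a minimal sketch (the variable names are just for illustration):

// A plain variable changes silently — React never re-renders:
let clicks = 0;
clicks += 1;

// State from useState queues a re-render whenever it's updated:
const [count, setCount] = useState(0);
setCount(count + 1); // the component re-renders with the new value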

The hook is built right into React, so we can import it with a single line:

import React, { useState } from 'react';

We are going to create a simple component that returns "Hello" if the user is already logged in or a login form if the user is logged out. We check the id variable for that.

Our form submissions will be handled by a function we’re creating called handleSubmit. It will check if the Name form field is completed. If it is, we will set the id and room values for that user. Otherwise, we’ll throw in a message reminding the user that the Name field is required in order to proceed.

// App.js
import React, { useState } from 'react';
import './index.css';

export default () => {
  const [room, setRoom] = useState('');
  const [id, setId] = useState('');

  const handleSubmit = e => {
    e.preventDefault();
    const name = document.querySelector('#name').value.trim();
    const room_value = document.querySelector('#room').value.trim();
    if (!name) {
      return alert("Name can't be empty");
    }
    setId(name);
    setRoom(room_value);
  };

  return id !== '' ? (
    <div>Hello</div>
  ) : (
    <div style={{ textAlign: 'center', margin: '30vh auto', width: '70%' }}>
      <form onSubmit={event => handleSubmit(event)}>
        <input id="name" required placeholder="What is your name .." /><br />
        <input id="room" placeholder="What is your room .." /><br />
        <button type="submit">Submit</button>
      </form>
    </div>
  );
};

That’s how we’re using the useState hook in our chat application. Again, we’re importing the hook from React, constructing values for the user’s ID and chat room location, setting those values if the user’s state is logged in, and returning a login form if the user is logged out.

Using the useSocket hook

We're going to use an open source hook called useSocket to maintain a connection to our server. Unlike useState, this hook is not baked into React, so we’re going to have to add it to our project before importing it into the app.

npm add use-socket.io-client

The server connection is maintained by using the React Hooks version of the socket.io library, which is an easier way of maintaining websocket connections with a server. We are using it for sending and receiving real-time messages as well as maintaining events, like connecting to a room.

The default socket.io client library has global declarations, i.e., the socket variable we define can be used by any component. However, our data can be manipulated from anywhere and we won't know where those changes are happening. Socket hooks counter this by constraining hook definitions at the component level, meaning each component is responsible for its own data transfer.

The basic usage for useSocket looks like this:

const [socket] = useSocket('socket-url')

We’re going to be using a few socket APIs as we move ahead. For the sake of reference, all of them are outlined in the socket.io documentation. But for now, let’s import the hook since we’ve already installed it.

import useSocket from 'use-socket.io-client';

Next, we’ve got to initialize the hook by connecting to our server. Then we’ll log the socket in the console to check if it is properly connected.

const [id, setId] = useState('');
const [socket] = useSocket('https://open-chat-naostsaecf.now.sh');
socket.connect();

console.log(socket);

Open the browser console and the URL in that snippet should be logged.

Using the useImmer hook

Our chat app will make use of the useImmer hook to manage state of arrays and objects without mutating the original state. It combines useState and Immer to give immutable state management. This will be handy for managing lists of people who are online and messages that need to be displayed.

Using Immer with useState allows us to change an array or object by creating a new state from the current state while preventing mutations directly on the current state. This offers us more safety as far as leaving the current state intact while being able to manipulate state based on different conditions.

Again, we’re working with a hook that’s not built into React, so let’s import it into the project:

npm add use-immer

The basic usage is pretty straightforward: the first value in the destructured pair is the current state and the second is the function that updates it, while useImmer itself takes the initial value for that state.

const [data, setData] = useImmer(default_value)

Using the setData function

Notice the setData function in that last example? We’re using it to make a draft copy of the current data that we can manipulate safely, then use as the next state once the changes are made immutable. This way, our original data is preserved until we’re done running our functions and we’re absolutely clear to update the current data.

setData(draftState => {
  draftState.operation(); // draftState is a draft copy of the current data
});

// ...or return the next state directly
setData(draft => newState);

Using the useEffect hook

Alright, we’re back to a hook that’s built right into React. We’re going to use the useEffect hook to run a piece of code only when the application loads. This ensures that our code only runs once rather than every time the component re-renders with new data, which is good for performance.

All we need to do to start using the hook is to import it — no installation needed!

import React, { useState, useEffect } from 'react';

We will need a component that renders a message or an update based on the presence or absence of a sender ID in the array. Being the creative people we are, let’s call that component Messages.

const Messages = props => props.data.map(m =>
  m[0] !== ''
    ? (<li key={m[0]}><strong>{m[0]}</strong> : <div className="innermsg">{m[1]}</div></li>)
    : (<li key={m[1]} className="update">{m[1]}</li>)
);

Let’s put our socket logic inside useEffect so that we don't duplicate the same set of messages repeatedly when a component re-renders. We will define our message hook in the component, connect to the socket, then set up listeners for new messages and updates in the useEffect hook itself. We will also set up update functions inside the listeners.

const [socket] = useSocket('https://open-chat-naostsaecf.now.sh');
socket.connect();

const [messages, setMessages] = useImmer([]);

useEffect(() => {
  socket.on('update', message => setMessages(draft => {
    draft.push(['', message]);
  }));

  socket.on('message que', (nick, message) => {
    setMessages(draft => {
      draft.push([nick, message]);
    });
  });
}, []); // the empty dependency array registers the listeners only once, on mount

Another touch we’ll throw in for good measure is a "join" message if the username and room name are correct. This triggers the rest of the event listeners and we can receive past messages sent in that room along with any updates required.

// ...
  setRoom(room_value);
  socket.emit('join', name, room_value);
};

return id ? (
  <section style={{ display: 'flex', flexDirection: 'row' }}>
    <ul id="messages"><Messages data={messages}></Messages></ul>
    <ul id="online"> &#x1f310; :</ul>
    <div id="sendform">
      <form id="messageform" style={{ display: 'flex' }}>
        <input id="m" /><button type="submit">Send Message</button>
      </form>
    </div>
  </section>
) : (
  // ...

The finishing touches

We only have a few more tweaks to wrap up our chat app. Specifically, we still need:

  • A component to display people who are online
  • A useImmer hook for it with a socket listener
  • A message submission handler with appropriate sockets

All of this builds off of what we’ve already covered so far. I’m going to drop in the full code for the App.js file to show how everything fits together.

// App.js
import React, { useState, useEffect } from 'react';
import useSocket from 'use-socket.io-client';
import { useImmer } from 'use-immer';
import './index.css';

const Messages = props => props.data.map(m =>
  m[0] !== ''
    ? (<li key={m[0]}><strong>{m[0]}</strong> : <div className="innermsg">{m[1]}</div></li>)
    : (<li key={m[1]} className="update">{m[1]}</li>)
);

const Online = props => props.data.map(m => <li key={m[0]} id={m[0]}>{m[1]}</li>);

export default () => {
  const [room, setRoom] = useState('');
  const [id, setId] = useState('');

  const [socket] = useSocket('https://open-chat-naostsaecf.now.sh');
  socket.connect();

  const [messages, setMessages] = useImmer([]);
  const [online, setOnline] = useImmer([]);

  useEffect(() => {
    socket.on('message que', (nick, message) => {
      setMessages(draft => { draft.push([nick, message]) })
    });

    socket.on('update', message => setMessages(draft => {
      draft.push(['', message]);
    }));

    socket.on('people-list', people => {
      let newState = [];
      for (let person in people) {
        newState.push([people[person].id, people[person].nick]);
      }
      setOnline(draft => { draft.push(...newState) });
      console.log(online);
    });

    socket.on('add-person', (nick, id) => {
      setOnline(draft => { draft.push([id, nick]) })
    });

    socket.on('remove-person', id => {
      setOnline(draft => draft.filter(m => m[0] !== id))
    });

    socket.on('chat message', (nick, message) => {
      setMessages(draft => { draft.push([nick, message]) })
    });
  }, []); // the empty dependency array registers the listeners only once, on mount

  const handleSubmit = e => {
    e.preventDefault();
    const name = document.querySelector('#name').value.trim();
    const room_value = document.querySelector('#room').value.trim();
    if (!name) {
      return alert("Name can't be empty");
    }
    setId(name);
    setRoom(room_value);
    socket.emit('join', name, room_value);
  };

  const handleSend = e => {
    e.preventDefault();
    const input = document.querySelector('#m');
    if (input.value.trim() !== '') {
      socket.emit('chat message', input.value, room);
      input.value = '';
    }
  };

  return id ? (
    <section style={{ display: 'flex', flexDirection: 'row' }}>
      <ul id="messages"><Messages data={messages} /></ul>
      <ul id="online"> &#x1f310; : <Online data={online} /></ul>
      <div id="sendform">
        <form onSubmit={e => handleSend(e)} style={{ display: 'flex' }}>
          <input id="m" /><button style={{ width: '75px' }} type="submit">Send</button>
        </form>
      </div>
    </section>
  ) : (
    <div style={{ textAlign: 'center', margin: '30vh auto', width: '70%' }}>
      <form onSubmit={event => handleSubmit(event)}>
        <input id="name" required placeholder="What is your name .." /><br />
        <input id="room" placeholder="What is your room .." /><br />
        <button type="submit">Submit</button>
      </form>
    </div>
  );
};

Wrapping up

That's it! We built a fully functional group chat application together! How cool is that? The complete code for the project can be found here on GitHub.

What we’ve covered in this article is merely a glimpse of how React Hooks can boost your productivity and help you build powerful applications with powerful front-end tooling. I have built a more robust chat application in this comprehensive tutorial. Follow along if you want to level up further with React Hooks.

Now that you have hands-on experience with React Hooks, use your newly gained knowledge to get even more practice! Here are a few ideas of what you can build from here:

  • A blogging platform
  • Your own version of Instagram
  • A clone of Reddit

Have questions along the way? Leave a comment and let’s make awesome things together.

The post Build a Chat App Using React Hooks in 100 Lines of Code appeared first on CSS-Tricks.

Position Sticky and Table Headers

Css Tricks - Fri, 07/12/2019 - 12:31pm

You can't position: sticky; a <thead>. Nor a <tr>. But you can sticky a <th>, which means you can make sticky headers inside a regular ol' <table>. This is tricky stuff because, if you didn't know this weird quirk, it would be hard to blame you. It makes way more sense to sticky a parent element like the table header rather than each individual element in a row.

The issue boils down to the fact that stickiness requires position: relative to work and that doesn't apply to <thead> and <tr> in the CSS 2.1 spec.

There are two very extreme reactions to this, should you need to implement sticky table headers and not be aware of the <th> workaround.

  • Don't use table markup at all. Instead, use different elements (<div>s and whatnot) and other CSS layout methods to replicate the style of a table while not being locked out of using position: relative and creating position: sticky parent elements.
  • Use table elements, but totally remove all their styling defaults with new display values.

The first is dangerous because you aren't using semantic and accessible elements for the content to be read and navigated. The second is almost the same. You can go that route, but need to be really careful to re-apply semantic roles.

Anyway, none of that matters if you just stick (get it?!) to using a sticky value on those <th> elements.
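For reference, the workaround itself is tiny. Something like this (a sketch; the offset and background values are assumptions you'd adapt to your own layout):

th {
  position: sticky;
  top: 0;           /* stick to the top of the scrolling container */
  background: #fff; /* so rows don't show through while it's stuck */
}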

See the Pen
Sticky Table Headers with CSS
by Chris Coyier (@chriscoyier)
on CodePen.

It's probably a bit weird to have table headers as a row in the middle of a table, but it's just illustrating the idea. I was imagining colored header bars separating players on different sports teams or something.

Anytime I think about data tables, I also think about how tricky it can be to make them responsive. Fortunately, there are a variety of ways, all depending on the best way to group and explore the data in them.

The post Position Sticky and Table Headers appeared first on CSS-Tricks.

Color Inputs: A Deep Dive into Cross-Browser Differences

Css Tricks - Fri, 07/12/2019 - 5:09am

In this article, we'll be taking a look at the structure inside <input type='color'> elements, browser inconsistencies, why they look a certain way in a certain browser, and how to dig into it. Having a good understanding of this input allows us to evaluate whether a certain cross-browser look can be achieved and how to do so with a minimum amount of effort and code.

Here's exactly what we're talking about:

But before we dive into this, we need to get into...

Accessibility issues!

We've got a huge problem here: for those who completely rely on a keyboard, this input doesn't work as it should in Safari and in Firefox on Windows, but it does work in Firefox on Mac and Linux (which I only tested on Fedora, so feel free to yell at me in the comments if it doesn't work for you using another distribution).

In Firefox on Windows, we can Tab to the input to focus it, press Enter to bring up a dialog... which we then cannot navigate with the keyboard!

I've tried tabbing, arrow keys, and every other key available on the keyboard... nothing! I could at least close the dialog with good old Alt + F4. Later, in the bug ticket I found for this on Bugzilla, I also discovered a workaround: Alt + Tab to another window, then Alt + Tab back and the picker dialog can be navigated with the keyboard.

Things are even worse in Safari. The input isn't even focusable (bug ticket) if VoiceOver isn't on. And even with VoiceOver on, tabbing through the dialog the input opens is impossible.

If you'd like to use <input type='color'> on an actual website, please let browsers know this is something that needs to be solved!

How to look inside

In Chrome, we need to bring up DevTools, go to Settings and, in the Preferences section under Elements, check the Show user agent shadow DOM option.

How to view the structure inside an input in Chrome.

Then, when we return to inspect our element, we can see inside its shadow DOM.

In Firefox, we need to go to about:config and ensure the devtools.inspector.showAllAnonymousContent flag is set to true.

How to view the structure inside an input in Firefox.

Then, we close the DevTools and, when we inspect our input again, we can see inside our input.

Sadly, we don't seem to have an option for this in pre-Chromium Edge.

The structure inside

The structure revealed in DevTools differs from browser to browser, just like it does for range inputs.

In Chrome, at the top of the shadow DOM, we have a <div> wrapper that we can access using ::-webkit-color-swatch-wrapper.

Inside it, we have another <div> we can access with ::-webkit-color-swatch.

Inner structure in Chrome.

In Firefox, we only see one <div>, but it's not labeled in any way, so how do we access it?

On a hunch, given this <div> has the background-color set to the input's value attribute, just like the ::-webkit-color-swatch component, I tried ::-moz-color-swatch. And it turns out it works!

Inner structure in Firefox.

However, I later learned we have a better way of figuring this out for Firefox!

We can go into the Firefox DevTools Settings and, in the Inspector section, make sure the "Show Browser Styles" option is checked. Then, we go back to the Inspector and select this <div> inside our <input type='color'>. Among the user agent styles, we see a rule set for input[type='color']::-moz-color-swatch!

Enable viewing browser styles in Firefox DevTools.

In pre-Chromium Edge, we cannot even see what kind of structure we have inside. I gave ::-ms-color-swatch a try, but it didn't work and neither did ::-ms-swatch (which I considered because, for an input type='range', we have ::-webkit-slider-thumb and ::-moz-range-thumb, but just ::-ms-thumb).

After a lot of searching, all I found was this issue from 2016. Pre-Chromium Edge apparently doesn't allow us to style whatever is inside this input. Well, that's a bummer.

How to look at the browser styles

In all browsers, we have the option of not applying any styles of our own and then looking at the computed styles.

In Chrome and Firefox, we can also see the user agent stylesheet rule sets that are affecting the currently selected element (though we need to explicitly enable this in Firefox, as seen in the previous section).

Checking browser styles in Chrome and Firefox.

This is oftentimes more helpful than the computed styles, but there are exceptions and we should still always check the computed values as well.

In Firefox, we can also see the CSS file for the form elements at view-source:resource://gre-resources/forms.css.

Checking browser styles in Firefox.

The input element itself

We'll now be taking a look at the default values of a few properties in various browsers in order to get a clear picture of what we'd really need to set explicitly in order to get a custom cross-browser result.

The first property I always think about checking when it comes to <input> elements is box-sizing. The initial value of this property is border-box in Firefox, but content-box in Chrome and Edge.

The box-sizing values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

We can see that Firefox is setting it to border-box on <input type='color'>, but it looks like Chrome isn't setting it at all, so it's left with the initial value of content-box (and I suspect the same is true for Edge).

In any event, what it all means is that, if we are to have a border or a padding on this element, we also need to explicitly set box-sizing so that we get a consistent result across all these browsers.

The font property value is different for every browser, but since we don't have text inside this input, all we really care about is the font-size, which is consistent across all browsers I've checked: 13.33(33)px. This is a value that really looks like it came from dividing 40px by 3, at least in Chrome.

The font values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

This is a situation where the computed styles are more useful for Firefox, because if we look at the browser styles, we don't get much in terms of useful information:

Sometimes the browser styles are pretty much useless (Firefox screenshot).

The margin is also consistent across all these browsers, computing to 0.

The margin values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

The border is different for every single browser. In both Chrome and Edge, we have a solid 1px one, but the border-color is different (rgb(169, 169, 169) for Chrome and rgb(112, 112, 112) for Edge). In Firefox, the border is an outset 2px one, with a border-color of... ThreeDLightShadow?!

The border values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

What's the deal with ThreeDLightShadow? If it doesn't sound familiar, don't worry! It's a (now deprecated) CSS2 system value, which Firefox on Windows shows me to be rgb(227, 227, 227) in the Computed styles tab.

Computed border-color for <input type='color'> in Firefox on Windows.

Note that in Firefox (at least on Windows), the operating system zoom level (Settings → System → Display → Scale and Layout → Change the size of text, apps and other items) is going to influence the computed value of the border-width, even though this doesn't seem to happen for any other property I've checked and it seems to be partially related to the border-style.

Zoom level options on Windows.

The strangest thing is the computed border-width values for various zoom levels don't seem to make any sense. If we keep the initial border-style: outset, we have:

  • 1.6px for 125%
  • 2px for 150%
  • 1.7px for 175%
  • 1.5px for 200%
  • 1.8px for 225%
  • 1.6px for 250%
  • 1.66667px for 300%

If we set border-style: solid, we have a computed border-width of 2px, exactly as it was set, for zoom values that are multiples of 50% and the exact same computed values as for border-style: outset for all the other zoom levels.

The padding is the same for Chrome and Edge (1px 2px), while Firefox is the odd one out again.

The padding values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

It may look like the Firefox padding is 1px. That's what it is set to and there's no indication of anything overriding it — if a property is overridden, then it's shown as grey and with a strike-through.

Spotting overrides in Firefox.

But the computed value is actually 0 8px! Moreover, this is a value that doesn't depend on the operating system zoom level. So, what the hairy heck is going on?!

Computed value for padding in Firefox doesn't match the value that was set on the input.

Now, if you've actually tried inspecting a color input, took a close look at the styles set on it, and your brain works differently than mine (meaning you do read what's in front of you and don't just scan for the one thing that interests you, completely ignoring everything else...) then you've probably noticed there is something overriding the 1px padding (and should be marked as such) — the flow-relative padding!

Flow-relative padding overrides in Firefox.

Dang, who knew those properties with lots of letters were actually relevant? Thanks to Zoltan for noticing and letting me know. Otherwise, it probably would have taken me two more days to figure this one out.

This raises the question of whether the same kind of override couldn't happen in other browsers and/or for other properties.

Edge doesn't support CSS logical properties, so the answer is a "no" in that corner.

In Chrome, none of the logical properties for margin, border or padding are set explicitly for <input type='color'>, so we have no override.

Concerning other properties in Firefox, we could have found ourselves in the same situation for margin or for border, but with these two, it just so happens the flow-relative properties haven't been explicitly set for our input, so again, there's no override.

Even so, it's definitely something to watch out for in the future!

Moving on to dimensions, our input's width is 44px in Chrome and Edge and 64px in Firefox.

The width values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

Its height is 23px in all three browsers.

The height values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

Note that, since Chrome and Edge have a box-sizing of content-box, their width and height values do not include the padding or border. However, since Firefox has box-sizing set to border-box, its dimensions include the padding and border.

The layout boxes for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

This means the content-box is 44px × 23px in Chrome and Edge and 44px × 19px in Firefox, the padding-box is 48px × 25px in Chrome and Edge and 60px × 19px in Firefox, and the border-box is 50px × 27px in Chrome and Edge and 64px × 23px in Firefox.

We can clearly see how the dimensions were set in Chrome and I'd assume they were set in the same direct way in Edge as well, even if Edge doesn't allow us to trace this stuff. Firefox doesn't show these dimensions as having been explicitly set and doesn't even allow us to trace where they came from in the Computed tab (as it does for other properties like border, for example). But if we look at all the styles that have been set on input[type='color'], we discover the dimensions have been set as flow-relative ones (inline-size and block-size).

How <input type='color'> dimensions have been set in Firefox.

The final property we check for the normal state of the actual input is background. Here, Edge is the only browser to have a background-image (set to a top to bottom gradient), while Chrome and Firefox both have a background-color set to ButtonFace (another deprecated CSS2 system value). The strange thing is this should be rgb(240, 240, 240) (according to this resource), but its computed value in Chrome is rgb(221, 221, 221).

The background values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

What's even stranger is that, if we actually look at our input in Chrome, it sure does look like it has a gradient background! If we screenshot it and then use a picker, we get that it has a top to bottom gradient from #f8f8f8 to #ddd.

What the actual input looks like in Chrome. It appears to have a gradient, in spite of the info we get from DevTools telling us it doesn't.

Also, note that changing just the background-color (or another property not related to dimensions like border-radius) in Edge also changes the background-image, background-origin, border-color or border-style.

Edge: side-effects of changing background-color.

Other states

We can take a look at the styles applied for a bunch of other states of an element by clicking the :hov button in the Styles panel for Chrome and Firefox and the a: button in the same Styles panel for Edge. This reveals a section where we can check the desired state(s).

Taking a look at other states in Chrome, Firefox, Edge (from top to bottom).

Note that, in Firefox, checking a class only visually applies the user styles on the selected element, not the browser styles. So, if we check :hover for example, we won't see the :hover styles applied on our element. We can however see the user agent styles matching the selected state for our selected element shown in DevTools.

Also, we cannot test for all states like this and let's start with such a state.

:disabled

In order to see how styles change in this state, we need to manually add the disabled attribute to our <input type='color'> element.

Hmm... not much changes in any browser!

In Chrome, we see the background-color is slightly different (rgb(235, 235, 228) in the :disabled state versus rgb(221, 221, 221) in the normal state).

Chrome :disabled styling.

But the difference is only clear looking at the info in DevTools. Visually, I can tell there's a slight difference between an input that's :disabled and one that's not if they're side-by-side, but if I didn't know beforehand, I couldn't tell which is which just by looking at them, and if I just saw one, I couldn't tell whether it's enabled or not without clicking it.

Disabled (left) versus enabled (right) <input type='color'> in Chrome.

In Firefox, we have the exact same values set for the :disabled state as for the normal state (well, except for the cursor, which realistically, isn't going to produce different results save for exceptional cases anyway). What gives, Firefox?!

Firefox :disabled (top) versus normal (bottom) styling.

In Edge, both the border-color and the background gradient are different.

Edge :disabled styling (by checking computed styles).

We have the following styles for the normal state:

border-color: rgb(112, 112, 112); background-image: linear-gradient(rgb(236, 236, 236), rgb(213, 213, 213));

And for the :disabled state:

border-color: rgb(186, 186, 186); background-image: linear-gradient(rgb(237, 237, 237), rgb(229, 229, 229));

Clearly different if we look at the code and visually better than Chrome, though it still may not be quite enough:

Disabled (left) versus enabled (right) <input type='color'> in Edge.

:focus

This is one state we can test by toggling the DevTools pseudo-classes. Well, in theory. In practice, it doesn't really help us in all browsers.

Starting with Chrome, we can see that we have an outline in this state and the outline-color computes to rgb(77, 144, 254), which is some kind of blue.

:focus."/>Chrome :focus styling.

Pretty straightforward and easy to spot.

Moving on to Firefox, things start to get hairy! Unlike Chrome, toggling the :focus pseudo-class from DevTools does nothing on the input element, though by actually focusing it (by tabbing to it or clicking it), the border becomes blue and we get a dotted rectangle within — but there's no indication in DevTools regarding what is happening.

What happens in Firefox when tabbing to our input to :focus it.

If we check Firefox's forms.css, it provides an explanation for the dotted rectangle. This is the dotted border of a pseudo-element, ::-moz-focus-inner (a pseudo-element which, for some reason, isn't shown in DevTools inside our input as ::-moz-color-swatch is). This border is initially transparent and then becomes visible when the input is focused — the pseudo-class used here (:-moz-focusring) is pretty much an old Firefox version of the new standard (:focus-visible), which is currently only supported by Chrome behind the Experimental Web Platform features flag.

Firefox: where the inner dotted rectangle on :focus comes from.

What about the blue border? Well, it appears this one isn't set by a stylesheet, but at an OS level instead. The good news is we can override all these styles should we choose to do so.
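If we'd rather take that styling into our own hands, a couple of rules along these lines would do it (a sketch; the outline values are placeholders, not recommendations):

input[type='color']::-moz-focus-inner {
  border: none; /* remove Firefox's inner dotted rectangle */
}

input[type='color']:focus {
  outline: 2px solid dodgerblue; /* a clearly visible focus style of our own */
}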

In Edge, we're faced with a similar situation. Nothing happens when toggling the :focus pseudo-class from DevTools, but if we actually tab to our input to focus it, we can see an inner dotted rectangle.

What happens in Edge when tabbing to our input to :focus it.

Even though I have no way of knowing for sure, I suspect that, just like in Firefox, this inner rectangle is due to a pseudo-element that becomes visible on :focus.

:hover

In Chrome, toggling this pseudo-class doesn't reveal any :hover-specific styles in DevTools. Furthermore, actually hovering the input doesn't appear to change anything visually. So it looks like Chrome really doesn't have any :hover-specific styles?

In Firefox, toggling the :hover pseudo-class from DevTools reveals a new rule in the styles panel:

Firefox :hover styling as seen in DevTools.

When actually hovering the input, we see the background turns light blue and the border blue, so the first thought would be that light blue is the -moz-buttonhoverface value and that the blue border is again set at an OS level, just like in the :focus case.

What actually happens in Firefox on :hover.

However, if we look at the computed styles, we see the same background we have in the normal state, so that blue background is probably really set at an OS level as well, in spite of having that rule in the forms.css stylesheet.

Firefox: computed background-color of an <input type='color'> on :hover.

In Edge, toggling the :hover pseudo-class from DevTools gives our input a light blue (rgb(166, 244, 255)) background and a blue (rgb(38, 160, 218)) border, whose exact values we can find in the Computed tab:

Edge: computed background-color and border-color of an <input type='color'> on :hover.

:active

Checking the :active state in the Chrome DevTools does nothing visually and shows no specific rules in the Styles panel. However, if we actually click our input, we see that the background gradient that doesn't even show up in DevTools in the normal state gets reversed.

What the actual input looks like in Chrome in the :active state. It appears to have a gradient (reversed from the normal state), in spite of the info we get from DevTools telling us it doesn't.

In Firefox DevTools, toggling the :active state on does nothing, but if we also toggle the :hover state on, then we get a rule set that changes the inline padding (the block padding is set to the same value of 0 it has in all other states), the border-style and sets the background-color back to our old friend ButtonFace.

Firefox :active styling as seen in DevTools.

In practice, however, the only thing that matches the info we get from DevTools is the inline shift given by the change in logical padding. The background becomes a lighter blue than the :hover state and the border is blue. Both of these changes are probably happening at an OS level as well.

What actually happens in Firefox in an :active state.

In Edge, activating the :active class from DevTools gives us the exact same styles we have for the :hover state. However, if we have both the :hover and the :active states on, things change a bit. We still have a light blue background and a blue border, but both are darker now (rgb(52, 180, 227) for the background-color and rgb(0, 137, 180) for the border-color):

The computed background-color and border-color of an <input type='color'> on :active viewed in Edge.

This is the takeaway: if we want consistent cross-browser results for <input type='color'>, we should define our own clearly distinguishable styles for all these states ourselves because, fortunately, almost all the browser defaults — except for the inner rectangle we get in Edge on :focus — can be overridden.
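As a starting point, a baseline reset along these lines gives every browser the same box to build distinguishable state styles on (a sketch; every value here is a placeholder choice, not a recommendation):

input[type='color'] {
  box-sizing: border-box; /* normalize the Chrome/Edge vs. Firefox difference */
  width: 3em;
  height: 2em;
  margin: 0;
  padding: 0;
  border: 1px solid #888;
  background: #eee;
}

input[type='color']:hover { border-color: #555; }
input[type='color']:active { background: #ddd; }
input[type='color']:disabled { opacity: 0.5; }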

The swatch wrapper

This is a component we only see in Chrome, so if we want a cross-browser result, we should probably ensure it doesn't affect the swatch inside — this means ensuring it has no margin, border, padding or background and that its dimensions equal those of the actual input's content-box.

In order to know whether we need to mess with these properties (and maybe others as a result) or not, let's see what the browser defaults are for them.

Fortunately, we have no margin or border, so we don't need to worry about these.

The margin and border values for the swatch wrapper in Chrome.

We do however have a non-zero padding (of 4px 2px), so this is something we'll need to zero out if we want to achieve a consistent cross-browser result.

The padding values for the swatch wrapper in Chrome.

The dimensions are both conveniently set to 100%, which means we won't need to mess with them.

The size values for the swatch wrapper in Chrome.

Something we need to note here is that we have box-sizing set to border-box, so the padding gets subtracted from the dimensions set on this wrapper.

The box-sizing value for the swatch wrapper in Chrome.

This means that while the padding-box, border-box and margin-box of our wrapper (all equal because we have no margin or border) are identical to the content-box of the actual <input type='color'> (which is 44px × 23px in Chrome), getting the wrapper's content-box involves subtracting the padding from these dimensions. As a result, this box is 40px × 15px.

The box model for the swatch wrapper in Chrome.

The background is set to transparent, so that's another property we don't need to worry about resetting.

The background values for the swatch wrapper in Chrome.

There's one more property set on this element that caught my attention: display. It has a value of flex, which means its children are flex items.

The display value for the swatch wrapper in Chrome.

The swatch

This is a component we can style in Chrome and Firefox. Sadly, Edge doesn't expose it to allow us to style it, so we cannot change properties we might want to, such as border, border-radius or box-shadow.

The box-sizing property is one we need to set explicitly if we plan on giving the swatch a border or a padding because its value is content-box in Chrome, but border-box in Firefox.

The box-sizing values for the swatch viewed in Chrome (top) and Firefox (bottom).
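Putting this together with the wrapper's padding from the previous section, a reset for these inner parts could look like this (a sketch; note that the -webkit- and -moz- selectors must stay in separate rule sets, since one unrecognized selector invalidates an entire combined rule):

input[type='color']::-webkit-color-swatch-wrapper {
  padding: 0; /* zero out the default 4px 2px padding */
}

input[type='color']::-webkit-color-swatch {
  box-sizing: border-box;
  border: 1px solid #777; /* placeholder border */
}

input[type='color']::-moz-color-swatch {
  box-sizing: border-box;
  border: 1px solid #777; /* same look, separate rule for Firefox */
}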

Fortunately, the font-size is inherited from the input itself so it's the same.

The font-size values for the swatch viewed in Chrome (top) and Firefox (bottom).

The margin computes to 0 in both Chrome and Firefox.

The margin values for the swatch viewed in Chrome (top) and Firefox (bottom).

This is because most margins haven't been set, so they end up being 0 which is the default for <div> elements. However, Firefox is setting the inline margins to auto and we'll be getting to why that computes to 0 in just a little moment.

The inline margin for the swatch being set to auto in Firefox.

The border is solid 1px in both browsers. The only thing that differs is the border-color, which is rgb(119, 119, 119) in Chrome and grey (or rgb(128, 128, 128), so slightly lighter) in Firefox.

The border values for the swatch viewed in Chrome (top) and Firefox (bottom).

Note that the computed border-width in Firefox (at least on Windows) depends on the OS zoom level, just as it is in the case of the actual input.

The padding is luckily 0 in both Chrome and Firefox.

The padding values for the swatch viewed in Chrome (top) and Firefox (bottom).

The dimensions end up being exactly what we'd expect to find, assuming the swatch covers its parent's entire content-box.

The box model for the swatch viewed in Chrome (top) and Firefox (bottom).

In Chrome, the swatch parent is the <div> wrapper we saw earlier, whose content-box is 40px × 15px. This is equal to the margin-box and the border-box of the swatch (which coincide as we have no margin). Since the padding is 0, the content-box and the padding-box for the swatch are identical and, subtracting the 1px border, we get dimensions that are 38px × 13px.

In Firefox, the swatch parent is the actual input, whose content-box is 44px × 19px. This is equal to the margin-box and the border-box of the swatch (which coincide as we have no margin). Since the padding is 0, the content-box and the padding-box for the swatch are identical and, subtracting the 1px border, we get that their dimensions are 42px × 17px.

In Firefox, we see that the swatch is made to cover its parent's content-box by having both its dimensions set to 100%.

The size values for the swatch viewed in Chrome (top) and Firefox (bottom).

This is the reason why the auto value for the inline margin computes to 0.

But what about Chrome? We cannot see any actual dimensions being set. Well, this result is due to the flex layout and the fact that the swatch is a flex item that's made to stretch such that it covers its parent's content-box.

The flex value for the swatch wrapper in Chrome.

Final thoughts

Phew, we covered a lot of ground here! While it may seem exhaustive to dig this deep into one specific element, this is the sort of exercise that illustrates how difficult cross-browser support can be. We have our own styles, user agent styles and operating system styles to traverse and some of those are always going to be what they are. But, as we discussed at the very top, this winds up being an accessibility issue at the end of the day, and something to really consider when it comes to implementing a practical, functional application of a color input.

Remember, a lot of this is ripe territory to reach out to browser vendors and let them know how they can update their implementations based on your reported use cases. Here are the three tickets I mentioned earlier where you can either chime in or reference to create a new ticket:

The post Color Inputs: A Deep Dive into Cross-Browser Differences appeared first on CSS-Tricks.

Weekly Platform News: HTML Inspection in Search Console, Global Scope of Scripts, Babel env Adds defaults Query

Css Tricks - Thu, 07/11/2019 - 7:47am

In this week's look around the world of web platform news, Google Search Console makes it easier to view crawled markup, we learn that custom properties aren't computing hogs, variables defined at the top-level in JavaScript are global to other page scripts, and Babel env now supports the defaults query — plus all of last month's news compiled into a single package for you.

Easier HTML inspection in Google Search Console

The URL Inspection tool in Google Search Console now includes useful controls for searching within and copying the HTML code of the crawled page.

Note: The URL Inspection tool provides information about Google’s indexed version of a specific page. You can access Google Search Console at https://search.google.com/search-console.

(via Barry Schwartz)

CSS custom properties are computed once per element

The value of a CSS custom property is computed once per element. If you define a custom property --func on the <html> element that uses the value of another custom property --val, then re-defining the value of --val on a nested DOM element that uses --func won’t have any effect because the inherited value of --func is already computed.

html {
  --angle: 90deg;
  --gradient: linear-gradient(var(--angle), blue, red);
}

header {
  --angle: 270deg; /* ignored */
  background-image: var(--gradient); /* inherited value */
}

(via Miriam Suzanne)

The global scope of scripts

JavaScript variables created via let, const, or class declarations at the top level of a script (<script> element) continue to be defined in subsequent scripts included in the page.
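For instance, a minimal illustration (the variable name is hypothetical):

<script>
  let answer = 42; // a top-level let: not a property of window, but still shared
</script>
<script>
  console.log(answer);        // 42 — visible to this later script
  console.log(window.answer); // undefined — unlike var, let doesn't land on window
</script>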

Note: Axel Rauschmayer calls this "the global scope of scripts."

(via Surma)

Babel env now supports the defaults query

Babel’s env preset (@babel/preset-env) now allows you to target browserslist’s default browsers (which are listed at browsersl.ist). Note that if you don’t specify your target browsers, Babel env will run every syntax transform on your code.

{ "presets": [ [ "@babel/preset-env", { "targets": { "browsers": "defaults" } } ] ] }

(via Nicolò Ribaudo)

All the June 2019 news that's fit to... print

For your convenience, I have compiled all 59 news items that I’ve published throughout June into one 10-page PDF document.

Download PDF

The post Weekly Platform News: HTML Inspection in Search Console, Global Scope of Scripts, Babel env Adds defaults Query appeared first on CSS-Tricks.

Protecting Vue Routes with Navigation Guards

Css Tricks - Thu, 07/11/2019 - 5:21am

Authentication is a necessary part of every web application. It is a handy means by which we can personalize experiences and load content specific to a user — like a logged in state. It can also be used to evaluate permissions, and prevent otherwise private information from being accessed by unauthorized users.

A common practice that applications use to protect content is to house it under specific routes and build redirect rules that navigate users toward or away from a resource depending on their permissions. To gate content reliably behind protected routes, those pages need to be built as separate static pages; that way, redirect rules can properly handle the redirects.

In the case of Single Page Applications (SPAs) built with modern front-end frameworks, like Vue, redirect rules cannot be utilized to protect routes. Because all pages are served from a single entry file, from a browser’s perspective, there is only one page: index.html. In a SPA, route logic generally stems from a routes file. This is where we will do most of our auth configuration for this post. We will specifically lean on Vue’s navigation guards to handle authentication-specific routing, since they give us access to selected routes before those routes fully resolve. Let’s dig in to see how this works.

Roots and Routes

Navigation guards are a specific feature within Vue Router that provide additional functionality pertaining to how routes get resolved. They are primarily used to handle error states and navigate a user seamlessly without abruptly interrupting their workflow.

There are three main categories of guards in Vue Router: Global Guards, Per Route Guards and In Component Guards. As the names suggest, Global Guards are called when any navigation is triggered (i.e. when URLs change), Per Route Guards are called when the associated route is called (i.e. when a URL matches a specific route), and Component Guards are called when a component in a route is created, updated or destroyed. Within each category, there are additional methods that gives you more fine grained control of application routes. Here’s a quick break down of all available methods within each type of navigation guard in Vue Router.

Global Guards
  • beforeEach: action before entering any route (no access to this scope)
  • beforeResolve: action before the navigation is confirmed, but after in-component guards (same as beforeEach with this scope access)
  • afterEach: action after the route resolves (cannot affect navigation)
Per Route Guards
  • beforeEnter: action before entering a specific route (unlike global guards, this has access to this)
Component Guards
  • beforeRouteEnter: action before navigation is confirmed, and before component creation (no access to this)
  • beforeRouteUpdate: action after a new route has been called that uses the same component
  • beforeRouteLeave: action before leaving a route
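For orientation, here's where each kind of guard is registered (a sketch assuming Vue Router is imported; the component and route names are hypothetical):

// In component guard: defined on the component itself
const Example = {
  template: '<div>Example</div>',
  beforeRouteEnter(to, from, next) { next(); }
};

// Per route guard: defined on the route record
const router = new VueRouter({
  routes: [{
    path: '/example',
    component: Example,
    beforeEnter(to, from, next) { next(); }
  }]
});

// Global guard: registered on the router instance
router.beforeEach((to, from, next) => { next(); });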
Protecting Routes

To implement them effectively, it helps to know when to use them in any given scenario. If you wanted to track page views for analytics for instance, you may want to use the global afterEach guard, since it gets fired when the route and associated components are fully resolved. And if you wanted to prefetch data to load onto a Vuex store before a route resolves, you could do so using the beforeEnter per route guard.

Since our example deals with protecting specific routes based on a user’s access permissions, we will use per route navigation guards, namely the beforeEnter hook. This navigation guard gives us access to the proper route before the resolve completes, meaning that we can fetch data or check that data has loaded before letting a user pass through. Before diving into the implementation details of how this works, let’s briefly look at how our beforeEnter hook fits into our existing routes file. Below, we have our sample routes file, which has our protected route, aptly named protected. We will add our beforeEnter hook to it like so:

const router = new VueRouter({
  routes: [
    ...
    {
      path: "/protected",
      name: "protected",
      component: () => import(/* webpackChunkName: "protected" */ './Protected.vue'),
      beforeEnter(to, from, next) {
        // logic here
      }
    }
  ]
})

Anatomy of a route

The anatomy of a beforeEnter is not much different from other available navigation guards in Vue Router. It accepts three parameters: to, the “future” route the app is navigating to; from, the “current/soon past” route the app is navigating away from and next, a function that must be called for the route to resolve successfully.

Generally, when using Vue Router, next is called without any arguments. However, this assumes a perpetual success state. In our case, we want to ensure that unauthorized users who fail to enter a protected resource have an alternate path to take that redirects them appropriately. To do this, we will pass in an argument to next. For this, we will use the name of the route to navigate users to if they are unauthorized like so:

next({ name: "dashboard" })

Let’s assume in our case, that we have a Vuex store where we store a user’s authorization token. In order to check that a user has permission, we will check this store and either fail or pass the route appropriately.

beforeEnter(to, from, next) {
  // check vuex store //
  if (store.getters["auth/hasPermission"]) {
    next()
  } else {
    next({
      name: "dashboard" // back to safety route //
    });
  }
}

In order to ensure that events happen in sync and that the route doesn’t prematurely load before the Vuex action is completed, let’s convert our navigation guards to use async/await.

async beforeEnter(to, from, next) {
  try {
    var hasPermission = await store.dispatch("auth/hasPermission");
    if (hasPermission) {
      next()
    }
  } catch (e) {
    next({
      name: "dashboard" // back to safety route //
    })
  }
}

Never forget where you came from

So far our navigation guard fulfills its purpose of preventing unauthorized users access to protected resources by redirecting them to where they may have come from (i.e. the dashboard page). Even so, such a workflow is disruptive. Since the redirect is unexpected, a user may assume user error and attempt to access the route repeatedly with the eventual assumption that the application is broken. To account for this, let’s create a way to let users know when and why they are being redirected.

We can do this by passing in a query parameter to the next function. This allows us to append the protected resource path to the redirect URL. So, if you want to prompt a user to log into an application or obtain the proper permissions without having to remember where they left off, you can do so. We can get access to the path of the protected resource via the to route object that is passed into the beforeEnter function like so: to.fullPath.

async beforeEnter(to, from, next) {
  try {
    var hasPermission = await store.dispatch("auth/hasPermission");
    if (hasPermission) {
      next()
    }
  } catch (e) {
    next({
      name: "login", // back to safety route //
      query: { redirectFrom: to.fullPath }
    })
  }
}

Notifying

The next step in enhancing the workflow of a user failing to access a protected route is to send them a message letting them know of the error and how they can solve the issue (either by logging in or obtaining the proper permissions). For this, we can make use of in component guards, specifically, beforeRouteEnter, to check whether or not a redirect has happened. Because we passed in the redirect path as a query parameter in our routes file, we now can check the route object to see if a redirect happened.

beforeRouteEnter(to, from, next) {
  if (to.query.redirectFrom) {
    // do something //
  }
}

As I mentioned earlier, all navigation guards must call next in order for a route to resolve. The upside to the next function as we saw earlier is that we can pass an object to it. What you may not have known is that you can also access the Vue instance within the next function. Wuuuuuuut? Here’s what that looks like:

next(() => {
  console.log(this) // this is the Vue instance
})

You may have noticed that you don’t technically have access to the this scope when using beforeRouteEnter. Though this might be the case, you can still access the Vue instance by passing the vm into the function like so:

next(vm => {
  console.log(vm) // this is the Vue instance
})

This is especially handy because you can now create and appropriately update a data property with the relevant error message when a route redirect happens. Say you have a data property called errorMsg. You can now update this property from the next function within your navigation guards easily and without any added configuration. Using this, you would end up with a component like this:

<template>
  <div>
    <span>{{ errorMsg }}</span>
    <!-- some other fun content -->
    ...
    <!-- some other fun content -->
  </div>
</template>

<script>
export default {
  name: "Error",
  data() {
    return {
      errorMsg: null
    }
  },
  beforeRouteEnter(to, from, next) {
    if (to.query.redirectFrom) {
      next(vm => {
        vm.errorMsg = "Sorry, you don't have the right access to reach the route requested"
      })
    } else {
      next()
    }
  }
}
</script>

Conclusion

The process of integrating authentication into an application can be a tricky one. We covered how to gate a route from unauthorized access as well as how to put workflows in place that redirect users toward and away from a protected resource based on their permissions. The assumption thus far has been that you already have authentication configured in your application. If you don’t yet have this configured and you’d like to get up and running fast, I highly recommend working with authentication as a service. There are providers like Netlify’s Identity Widget or Auth0’s lock.

The post Protecting Vue Routes with Navigation Guards appeared first on CSS-Tricks.

Frontend Masters: The New, Complete Intro to React Course… Now with Hooks!

Css Tricks - Thu, 07/11/2019 - 5:19am

(This is a sponsored post.)

Much more than an intro, you’ll build a real-world app with the latest features in React including 🎣 hooks, effects, context, and portals.

We also have a complete React learning path for you to explore React even deeper!

Direct Link to ArticlePermalink

The post Frontend Masters: The New, Complete Intro to React Course… Now with Hooks! appeared first on CSS-Tricks.

The Fight Against Layout Jank

Css Tricks - Wed, 07/10/2019 - 1:06pm

A web page isn't locked in stone just because it has rendered visually. Media assets, like images, can come in and cause the layout to shift based on their size, which typically isn't known in fluid layouts until they do render. Or fonts can load and reflow layout. Or XHRs can bring in more content to be placed onto the page. We're always doing what we can to prevent the layout from shifting around — that's what I mean by layout jank. It's awkward and nobody likes it. At best, it causes you to lose your place while reading; at worst, it can mean clicking on something you really didn't mean to.

While I was trying to wrap my head around the new Layout Instability API and chatting it out with friends, Eric Portis said something characteristically smart. Basically, layout jank is a problem and it's being fought on multiple fronts.
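The API itself is easy to poke at. Here's a minimal sketch, assuming a browser that supports the layout-shift entry type, of logging shifts as they happen:

// Log each unexpected layout shift as it occurs.
// `hadRecentInput` filters out shifts caused by user interaction,
// which generally aren't counted as jank.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      console.log('Layout shift score:', entry.value);
    }
  }
});

observer.observe({ type: 'layout-shift', buffered: true });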

The post The Fight Against Layout Jank appeared first on CSS-Tricks.

Types or Tests: Why Not Both?

Css Tricks - Wed, 07/10/2019 - 9:30am

Every now and then, a debate flares up about the value of typed JavaScript. "Just write more tests!" yell some opponents. "Replace unit tests with types!" scream others. Both are right in some ways, and wrong in others. Twitter affords little room for nuance. But in the space of this article we can try to lay out a reasoned argument for how both can and should coexist.

Correctness: what we all really want

It’s best to start at the end. What we really want out of all this meta-engineering at the end is correctness. I don’t mean the strict theoretical computer science definition of it, but a more general adherence of program behavior to its specification: We have an idea of how our program ought to work in our heads, and the process of programming organizes bits and bytes to make that idea into reality. Because we aren’t always precise about what we want, and because we’d like to have confidence that our program didn’t break when we made a change, we write types and tests on top of the raw code we already have to write just to make things work in the first place.

So, if we accept that correctness is what we want, and types and tests are just automated ways to get there, it would be great to have a visual model of how types and tests help us achieve correctness, and therefore understand where they overlap and where they complement each other.

A visual model of program correctness

If we imagine the entire infinite Turing-complete possible space of everything programs can ever possibly do — inclusive of failures — as a vast gray expanse, then what we want our program to do, our specification, is a very, very, very small subset of that possible space (the green diamond below, exaggerated in size for sake of showing something):

Our job in programming is to wrangle our program as close to the specification as possible (knowing, of course, we are imperfect, and our spec is constantly in motion, e.g. due to human error, new features or under-specified behavior; so we never quite manage to achieve exact overlap):

Note, again, that the boundaries of our program’s behavior also include planned and unplanned errors for the purposes of our discussion here. Our meaning of "correctness" includes planned errors, but does not include unplanned errors.

Tests and Correctness

We write tests to ensure that our program fits our expectations, but have a number of choices of things to test:

The ideal tests are the orange dots in the diagram — they accurately test that our program does overlap the spec. In this visualization, we don’t really distinguish between types of tests, but you might imagine unit tests as really small dots, while integration/end-to-end tests are large dots. Either way, they are dots, because no one test fully describes every path through a program. (In fact, you can have 100% code coverage and still not test every path because of the combinatorial explosion!)
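To make that coverage point concrete, here's a tiny TypeScript sketch (the function and tests are hypothetical): two tests can touch every line of a function without exercising every path through it.

function f(a: boolean, b: boolean): number {
  let n = 0;
  if (a) n += 1;
  if (b) n += 2;
  return n;
}

// f(true, true) === 3 and f(false, false) === 0 together execute
// every line (100% coverage), yet the (true, false) and
// (false, true) paths are never tested.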

The blue dot in this diagram is a bad test. Sure, it tests that our program works, but it doesn’t actually pin it to the underlying spec (what we really want out of our program, at the end of the day). The moment we fix our program to align closer to spec, this test breaks, giving us a false positive.

The purple dot is a valuable test because it tests how we think our program should work and identifies an area where our program currently doesn’t. Leading with purple tests and fixing the program implementation accordingly is also known as Test-Driven Development.

The red test in this diagram is a rare test. Instead of normal (orange) tests that test "happy paths" (including planned error states), this is a test that expects and verifies that "unhappy paths" fail. If this test "passes" where it should "fail," that is a huge early warning sign that something went wrong — but it is basically impossible to write enough tests to cover the vast expanse of possible unhappy paths that exist outside of the green spec area. People rarely find value in testing that things that shouldn't work don't work, so they don’t do it; but it can still be a helpful early warning sign when things go wrong.
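As a hedged illustration, a "red" test might look like this in a Jest-style runner; parseConfig is a hypothetical function:

import { test, expect } from '@jest/globals';

declare function parseConfig(raw: string): unknown;

// Assert that an unhappy path fails on purpose. If this ever
// "passes" by not throwing, something has gone quietly wrong.
test('rejects malformed config instead of silently accepting it', () => {
  expect(() => parseConfig('{not json')).toThrow();
});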

Types and Correctness

Where tests are single points on the possibility space of what our program can do, types represent categories carving entire sections from the total possible space. We can visualize them as rectangles:

We pick a rectangle to contrast the diamond representing the program, because no type system alone can fully describe our program behavior using types alone. (To pick a trivial example of this, an id that should always be a positive integer is a number type, but the number type also accepts fractions and negative numbers. There is no way to restrict a number type to a specific range, beyond a very simple union of number literals.)
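Here's that trivial example spelled out in TypeScript (the names are illustrative):

type Id = number; // the spec says ids are positive integers...

const a: Id = 42;  // matches the spec
const b: Id = -3;  // type-checks, but violates the spec
const c: Id = 0.5; // type-checks, but violates the spec

// The closest the type system gets is a union of number literals,
// which only works for a small, fixed set of values:
type DieRoll = 1 | 2 | 3 | 4 | 5 | 6;
// const d: DieRoll = 7; // Error: Type '7' is not assignable to type 'DieRoll'.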

Types serve as a constraint on where our program can go as you code. If our program starts to exceed the specified boundaries of your program’s types, our type-checker (like TypeScript or Flow) will simply refuse to let us compile our program. This is nice, because in a dynamic language like JavaScript, it is very easy to accidentally create a crashing program that certainly wasn’t something you intended. The simplest value add is automated null checking. If foo has no method called bar, then calling foo.bar() will cause the all-too-familiar undefined is not a function runtime exception. If foo were typed at all, this could have been caught by the type-checker while writing, with specific attribution to the problematic line of code (with autocomplete as a concomitant benefit). This is something tests simply cannot do.
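In TypeScript terms, that looks something like this minimal sketch:

type Foo = { baz: () => void };

function run(foo: Foo) {
  foo.baz(); // fine
  // foo.bar(); // compile-time error: Property 'bar' does not exist on type 'Foo'.
  //            // In untyped JavaScript this would be a runtime TypeError instead.
}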

We might want to write strict types for our program as though we are trying to write the smallest possible rectangle that still fits our spec. However, this has a learning curve, because taking full advantage of type systems involves learning a whole new syntax and grammar of operators and generic type logic needed to model the full dynamic range of JavaScript. Handbooks and Cheatsheets help lower this learning curve, and more investment is needed here.

Fortunately, this adoption/learning curve doesn’t have to stop us. Since type-checking is an opt-in process with Flow and configurable strictness with TypeScript (with the ability to selectively ignore troublesome lines of code), we have our pick from a spectrum of type safety. We can even model this, too:

Larger rectangles, like the big red one in the chart above, represent a very permissive adoption of a type system on your codebase — for example, allowing implicit any and fully relying on type inference to merely restrict our program from the worst of our coding.

Moderate strictness (like the medium-size green rectangle) could represent a more faithful typing, but with plenty of escape hatches, like using explicit instances of any all over the codebase and manual type assertions. Still, the possible surface area of valid programs that don’t match our spec is massively reduced even with this light typing work.

Maximum strictness, like the purple rectangle, keeps things so tight to our spec that it sometimes finds parts of your program that don’t fit (and these are often unplanned errors in your program behavior). Finding bugs in an existing program like this is a very common story from teams converting vanilla JavaScript codebases. However, getting maximum type safety out of our type-checker likely involves taking advantage of generic types and special operators designed to refine and narrow the possible space of types for each variable and function.

Notice that we don’t technically have to write our program first before writing the types. After all, we just want our types to closely model our spec, so really we can write our types first and then backfill the implementation later. In theory, this would be Type-Driven Development; in practice, few people actually develop this way since types intimately permeate and interleave with our actual program code.

Putting them together

What we are eventually building up to is an intuitive visualization of how both types and tests complement each other in guaranteeing our program’s correctness.

Our Tests assert that our program specifically performs as intended in select key paths (although there are certain other variations of tests as discussed above, the vast majority of tests do this). In the language of the visualization we have developed, they "pin" the dark green diamond of our program to the light green diamond of our spec. Any movement away by our program breaks these tests, which makes them squawk. This is excellent! Tests are also infinitely flexible and configurable for the most custom of use cases.

Our Types assert that our program doesn’t run away from us by disallowing possible failure modes beyond a boundary that we draw, hopefully as tightly as possible around our spec. In the language of our visualization, they "contain" the possible drift of our program away from our spec (as we are always imperfect, and every mistake we make adds additional failure behavior to our program). Types are also blunt, but powerful (because of type inference and editor tooling) tools that benefit from a strong community supplying types you don’t have to write from scratch.

In short:

  • Tests are best at ensuring happy paths work.
  • Types are best at preventing unhappy paths from existing.

Use them together based on their strengths, for best results!

If you’d like to read more about how Types and Tests intersect, Gary Bernhardt’s excellent talk on Boundaries and Kent C. Dodds’ Testing Trophy were significant influences in my thinking for this article.

The post Types or Tests: Why Not Both? appeared first on CSS-Tricks.

Introducing Netlify Analytics

Css Tricks - Wed, 07/10/2019 - 12:05am

You work a while on a side project. You think it's pretty cool! You decide to release it into the world. And then… it goes well. Or it doesn’t go well. Wait, is that right? You forgot to add analytics — it just didn’t cross your mind at the time. Now you’re pretty curious how many people have been visiting the site, but… you’re not sure. Enter Netlify Analytics.

There are so many times where I:

  • Forget to add analytics
  • Don’t want to incur the extra page weight, or
  • I'm concerned with privacy issues

I released a CSS Grid Generator last month and I forgot to add analytics. The release went well, but now it's a bit of a black box for me as far as what happened there or if I need to adjust a release in the future. Now, however, I can enable Netlify Analytics and see into the past without having lost any information. Sweet.

Netlify Analytics doesn’t have a ton of bells and whistles — it’s not meant to be a replacement for super comprehensive marketing tools. But if you want to get some data about your site without adding a lot of scripts, it can be a handy tool.

One really nice thing about it is the accuracy. Since the data is coming from the server, you have a clear picture of what the server actually served, rather than relying on a third party whose reporting can vary due to things like ad blockers that skew client-side reporting (15% of users are estimated to use tools like Ghostery, for instance), caching, and other factors.

The Analytics Dashboard

The dashboard for each site shows some “at a glance” information:

Then you can dive into more detailed information by specific date:

There’s a bit of information from top sources and top pages:

There's an area for "Top Resources Not Found", which shows any pages, images, anything that your visitors are trying and failing to retrieve from your site. When I enabled it on mine, I was able to fix a broken resource that I had long forgotten about.

It’s going to be awesome being able to check how some of my dev projects are doing. But I'm also really excited to take that extra implementation step out of my work. The caveats to keep in mind are that your site needs to be hosted by Netlify in order to use the Analytics tools, and that it's a paid feature. Any site you enable will show up to 90 days (3 billing cycles) in the “Bandwidth used” chart, and up to 30 days in all other charts if it’s old enough. However, it could take up to 2 days between when you enable analytics and when your dashboard is calculated and populated.

Under the hood

The analytics dashboard itself is built with React and Highcharts. Highcharts is a JavaScript charting library that includes responsive options and an accessibility module. All of the components consume data from our internal analytics API.

Before development began, we conducted an internal comparison survey of data visualization libraries in order to choose the best one for our needs. We landed on Highcharts over other popular options like d3.js, primarily because it means any engineer at Netlify with JavaScript experience can jump in and contribute, whether they have deep SVG and D3-specific knowledge or not.

While the charts themselves are rendered as SVG elements, Highcharts allows you to render any text inside the graph using HTML, simplifying and speeding our development time and allowing us to use CSS for advanced styling. The Highcharts docs are also top notch and offer a ton of customization options through their declarative API.

We used the Highcharts wrapper for React in order to create reusable React components for each type of graph. The "Top sources," "Top pages," and "Top resources not found" cards use a different component that displays a <table> using the data passed in as props.

One of the trickier challenges we encountered on the UI side while building these graphs was displaying dates along the X axis of the area charts in a way that wouldn't look overwhelming.

Highcharts offers an option to customize the format of an axis label using a JavaScript callback function, so we hooked into that to display every other date as a label. From there, we wrote an algorithm to capture the first date of each month that was being displayed and add the month name into the markup for the label, making the UI a bit cleaner and easier to digest.
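That hook looks roughly like the sketch below. This isn't Netlify's actual code, just an assumed shape of it built on Highcharts' real xAxis.labels.formatter option:

Highcharts.chart('analytics-chart', {
  chart: { type: 'area' },
  xAxis: {
    type: 'datetime',
    labels: {
      formatter: function () {
        // `this.pos` is the tick index; show every other label only
        if (this.pos % 2 !== 0) return '';
        return Highcharts.dateFormat('%e', this.value);
      }
    }
  },
  series: [{ data: [/* [timestamp, pageviews] pairs */] }]
});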

Other Analytics Alternatives, with Snippets

If you’d still like to run third-party scripts and other kind of analytics, Netlify has capabilities to add something globally to <head> or <body> tags. This is useful because, depending on how your site is set up, it can be a bit of a pain to add third-party scripts to every page. Plus, sometimes you want to give the ability to change these scripts to someone who doesn't have access to the repo. Go to the particular site in the dashboard, then Settings → Build & Deploy → Post processing.

That's where you will find Snippet Injection:

Click "Add snippet" and you’ll be able to select whether you want to add the third-party snippet to the <body> or the <head> tag, and you’ll have a change to post your code in HTML. For example, if you need to add Google Analytics, you’d wrap it in a script tag like this:

<script async src="https://www.googletagmanager.com/gtag/js?id=UA-XXXXX-XX"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'UA-XXXXX-XX'); // use the same tracking ID as in the script src above
</script>

You’ll also name it so that you can keep track of it. If you need to add more later, this is helpful.

That’s it!

You’re off and running with either the new Netlify Analytics offering that’s built-in or a more robust tool.

The post Introducing Netlify Analytics appeared first on CSS-Tricks.

Developing a robust font loading strategy for CSS-Tricks

Css Tricks - Tue, 07/09/2019 - 1:40pm

Zach Leatherman worked closely with Chris to figure out the font loading strategy for this very website you're reading. Zach walks us through the design in this write-up and shares techniques that can be applied to other projects.

Spoiler alert: Font loading is a complex and important part of a project.

The really interesting part of this post is the way that Zach talks about changing the design based on what’s best for the codebase — or as Harry Roberts calls it, “normalising the design.” Is a user really going to notice the difference between font-weight: 400 and font-weight: 500? Well, if we can ditch a whole font file, then that could have a significant impact on performance which, in turn, improves the user experience.

I guess the conversation can instead be framed like this: Does the user experience of this font outweigh the user experience of a slightly faster website?

And this isn’t a criticism of the design at all! I think Zach shows us what a healthy relationship between designers and developers can look like: collaborating and making joint decisions based on the context and the problem at hand, rather than treating static mockups as the final, concrete source of truth.

Direct Link to Article

The post Developing a robust font loading strategy for CSS-Tricks appeared first on CSS-Tricks.

IndieWeb and Webmentions

Css Tricks - Tue, 07/09/2019 - 11:39am

The IndieWeb is a thing! They've got a conference coming up and everything. The New Yorker is even writing about it:

Proponents of the IndieWeb offer a fairly straightforward analysis of our current social-media crisis. They frame it in terms of a single question: Who owns the servers? The bulk of our online activity takes place on servers owned by a small number of massive companies. Servers cost money to run. If you’re using a company’s servers without paying for the privilege, then that company must be finding other ways to “extract value” from you.

As I understand it, the core concept is running your own website where you are completely in control, as opposed to publishing content you create on a third-party website. It doesn't say don't use third-party websites. It just says syndicate there if you want to, but make your own website the canonical source.

Don't tweet, but instead write a short blog post and auto-post it to Twitter. Don't blog on Medium, but instead write on your own blog and plop it over to Medium. Like that. In that way, you're getting the value of those services without giving anything up.

I can tell you that running my own website has done nothing but good things for me and I'm not alone. Check out what Khoi Vinh says it's done for him:

It’s hard to overstate how important my blog has been, but if I were to try to distill it down into one word, it would be: “amplifier.” Writing in general and the blog in particular has amplified everything that I’ve done in my career, effectively broadcasting my career in ways that just wouldn’t have happened otherwise.

I do the "have a website" part, but I don't do all I could in the syndication department. I don't cross-post to Medium or anywhere else. Would this site would be more successful if I did? I dunno, but with Hacker Noon leaving, freeCodeCamp having left, and Signal vs. Nose having left, I'm not particularly interested in experimenting there. That's not apples-to-apples, though, because IndieWeb style dictates it would be syndicated to other places, like a re-post instead of the original — not a home for the originals. Still.

In the case of syndication, I'd worry a little that the SEO elsewhere would trump my own, and I'd be relying on a <link rel="canonical">, which is something Medium apparently only supports via importing tools. (I didn't see it while poking around the editor.) I guess that's why you see those lines at the end of posts that say, "This article originally published on blahblahblah.com" all the time.
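For reference, that tag is a one-liner in the head of the syndicated copy pointing back at the original (the URL here is illustrative):

<link rel="canonical" href="https://css-tricks.com/original-post/">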

I'm often of two worlds. I like seeing numbers. It's useful for me to know what people like and connect with, and as someone who has sponsorship partners, some knowledge of traffic and engagement is required. Yet I get the perspective of folks who don't care about that. Om Malik writes about a renewed focus on his own blog:

My first decree was to eschew any and all analytics. I don’t want to be driven by “views,” or what Google deems worthy of rank. I write what pleases me, not some algorithm. Walking away from quantification of my creativity was an act of taking back control.

What I dwell on the most regarding syndication is the Twitter stuff. I look back at the analytics on this site at the end of every year and look at where the traffic came from — every year, Twitter is a teeny-weeny itty-bitty slice of the pie. Measured by traffic alone, the return is nowhere near worth the amount of effort we put into the stuff we tweet there. I always rationalize it to myself in other ways. I feel like Twitter is one of the major ways I stay updated with the industry and it's a major source of ideas for articles.

But can't I get all that without having Twitter be the isolated place where all that link-sharing and article commentary is done? Sure, I could and probably should do it IndieWeb-style. Then I talk myself out of it because it's more technical debt, and I end up worrying that the style of a tweet is rather unique to tweets, and that it doesn't translate into a blog post purely one-to-one.

Anyway.

There is this other IndieWeb "building block" (as Jeremey calls it) called webmentions. As far as I understand it, it's a POST-based system. I write something with a URL that points at your site, a webmention POSTs to you to let you know that happened, and you do with it what you will — probably save it to a data store and display it on your site like a comment. That's kinda rad for a couple of reasons:

  • Your post becomes the canonical home to a discussion around it. That way, if people are tweeting about it or writing responses anywhere, they aren't lost, but rather all together.
  • It encourages other people to use their own website to respond to you. A social web, but everybody with their own homes that they own and control.

If you're a WordPress person, perhaps this sounds a lot like Pingbacks? Indeed, they are. WordPress has long had "pingbacks" and "trackbacks" as a way to do essentially what I've just described. Jon Penland has a pretty good article comparing the three of them. Webmentions certainly seem like the simplest and best, and there is a plugin for them. I just might give it a shot. I have pingbacks and trackbacks turned off on the site right now because all it does is make the comment threads full of scraper site spam. Time will tell if webmentions end up abused in the same kind of way.

Webmentions are also this two-way street. You set up your site to accept these POSTs, which is straightforward enough — the most basic implementation could be a <form> you put on your own site — but then I would think being a good webmentions citizen would require you to be POSTing to other people's sites when you write about them. That's a little trickier to pull off.
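On the wire, a webmention is just a form-encoded POST with source and target parameters, per the W3C spec. Here's a minimal sketch; the endpoint URL is hypothetical, since in practice you'd discover it from the target page's rel="webmention" link:

// Notify another site that we linked to it.
// `source` is the page doing the mentioning; `target` is the page mentioned.
fetch('https://their-site.com/webmention-endpoint', {
  method: 'POST',
  body: new URLSearchParams({
    source: 'https://my-site.com/posts/my-reply/',
    target: 'https://their-site.com/posts/original/'
  })
});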

The plugin supports that, but like all things IndieWeb, I imagine most people are hand-rolling their sites. Remy has built a little service just for this. After publishing, you hit the service with the URL to what you wrote. It reads it (expecting a specific format), finds the links in what you've written, and hits those URLs with webmentions.

I guess that's what's so cool about all this IndieWeb stuff. Like a Progressive Web App, every step you take towards it is useful. The more people that do it, the better it gets for everyone, but it's useful anyway.

The post IndieWeb and Webmentions appeared first on CSS-Tricks.

Animating with Clip-Path

Css Tricks - Tue, 07/09/2019 - 4:53am

clip-path is one of those CSS properties we generally know is there but might not reach for often for whatever reason. It’s a little intimidating in the sense that it feels like math class because it requires working with geometric shapes, each with different values that draw certain shapes in certain ways.

We’re going to dive right into clip-path in this article, specifically looking at how we can use it to create pretty complex animations. I hope you’ll see just how awesome the property and its shape-shifting powers can be.

But first, let’s do a quick recap of what we’re working with.

Clip-path crash course

Just for a quick explanation as to what the clip-path is and what it provides, MDN describes it like this:

The clip-path CSS property creates a clipping region that sets what part of an element should be shown. Parts that are inside the region are shown, while those outside are hidden.

Consider the circle shape provided by clip-path. Once the circle is defined, the area inside it can be considered "positive" and the area outside it "negative." The positive space is rendered while the negative space is removed. Taking advantage of the fact that this relationship between positive and negative space can be animated provides for interesting transition effects… which is what we’re getting into in just a bit.

clip-path comes with four shapes out of the box, plus the ability to use a URL to provide a source to some other SVG <clipPath> element. I’ll let the CSS-Tricks almanac go into deeper detail, but here are examples of those first four shapes.

  • Circle: clip-path: circle(25% at 25% 25%);
  • Ellipse: clip-path: ellipse(25% 50% at 25% 50%);
  • Inset: clip-path: inset(10% 20% 30% 40% round 25%);
  • Polygon: clip-path: polygon(50% 25%, 75% 75%, 25% 75%);

Combining clippings with CSS transitions

Animating clip-path can be as simple as changing the property values from one shape to another using CSS transitions, triggered either by changing classes in JavaScript or an interactive change in state, like :hover:

.box {
  clip-path: circle(75%);
  transition: clip-path 1s;
}

.box:hover {
  clip-path: circle(25%);
}

See the Pen
clip-path with transition
by Geoff Graham (@geoffgraham)
on CodePen.

We can also use CSS animations:

@keyframes circle {
  0% { clip-path: circle(75%); }
  100% { clip-path: circle(25%); }
}

See the Pen
clip-path with CSS animation
by Geoff Graham (@geoffgraham)
on CodePen.

Some things to consider when animating clip-path:

  • It only affects what is rendered and does not change the box size of the element as relating to elements around it. For example, a floated box with text flowing around it will still take up the same amount of space even with a very small percentage clip-path applied to it.
  • Any CSS properties that extend beyond the box size of the element may be clipped. For example, an inset of 0% for all four sides that clips at the edges of the element will remove a box-shadow and require a negative percentage to see the box-shadow (see the sketch just after this list). Although, that could lead to interesting effects in and of itself!
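Here's a hedged sketch of that box-shadow workaround; the selector and values are illustrative:

.shadowed {
  box-shadow: 0 0 10px rgba(0, 0, 0, 0.5);
  /* inset(0%) would clip the shadow at the element's edges;
     negative values push the clipping region out past them */
  clip-path: inset(-15% round 10px);
}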

OK, let’s get into some light animation to kick things off.

Comparing the simple shapes

I’ve put together a demo where you can see each shape in action, along with a little explanation describing what’s happening.

See the Pen
Animating Clip-Path: Simple Shapes
by Travis Almand (@talmand)
on CodePen.

The demo makes use of Vue for the functionality, but the CSS is easily transferred to any other type of project.

We can break those out a little more to get a handle on the values for each shape and how changing them affects the movement.

Circle

clip-path: circle(<length|percentage> at <position>);

Circle accepts two properties that can be animated:

  1. Shape radius: can be a length or percentage
  2. Position: can be a length or percentage along the x and y axis
.circle-enter-active { animation: 1s circle reverse; }
.circle-leave-active { animation: 1s circle; }

@keyframes circle {
  0% { clip-path: circle(75%); }
  100% { clip-path: circle(0%); }
}

The circle shape is resized in the leave transition from an initial 75% radius (just enough to allow the element to appear fully) down to 0%. Since no position is set, the circle defaults to the center of the element both vertically and horizontally. The enter transition plays the animation in reverse by means of the "reverse" keyword in the animation property.

Ellipse

clip-path: ellipse(<length|percentage>{2} at <position>);

Ellipse accepts three properties that can be animated:

  1. Shape radius: can be a length or percentage on the horizontal axis
  2. Shape radius: can be a length or percentage on the vertical axis
  3. Position: can be a length or percentage along the x and y axis
.ellipse-enter-active { animation: 1s ellipse reverse; }
.ellipse-leave-active { animation: 1s ellipse; }

@keyframes ellipse {
  0% { clip-path: ellipse(80% 80%); }
  100% { clip-path: ellipse(0% 20%); }
}

The ellipse shape is resized in the leave transition from an initial 80% by 80%, which makes it a circular shape larger than the box, down to 0% by 20%. Since no position is set, the ellipse defaults to the center of the box both vertically and horizontally. The enter transition plays the animation in reverse by means of the "reverse" keyword in the animation property.

The effect is a shrinking circle that changes to a shrinking ellipse taller than wide wiping away the first element. Then the elements switch with the second element appearing inside the growing ellipse.

Inset

clip-path: inset(<length|percentage>{1,4} round <border-radius>{1,4});

The inset shape has up to five properties that can be animated. The first four represent each edge of the shape and behave similar to margins or padding. The first property is required while the next three are optional depending on the desired shape.

  1. Length/Percentage: can represent all four sides, top/bottom sides, or top side
  2. Length/Percentage: can represent left/right sides or right side
  3. Length/Percentage: represents the bottom side
  4. Length/Percentage: represents the left side
  5. Border radius: requires the "round" keyword before the value

One thing to keep in mind is that the values used are reversed from typical CSS usage. Defining an edge with zero means that nothing has changed; the shape is pushed outward to the element’s side. As the number is increased, say to 10%, the edge of the shape is pushed inward away from the element’s side.

.inset-enter-active { animation: 1s inset reverse; }
.inset-leave-active { animation: 1s inset; }

@keyframes inset {
  0% { clip-path: inset(0% round 0%); }
  100% { clip-path: inset(50% round 50%); }
}

The inset shape is resized in the leave transition from a full-sized square down to a circle because of the rounded corners changing from 0% to 50%. Without the round value, it would appear as a shrinking square. The enter transition plays the animation in reverse by means of the "reverse" keyword in the animation property.

The effect is a shrinking square that shifts to a shrinking circle wiping away the first element. After the elements switch the second element appears within the growing circle that shifts to a growing square.

Polygon

clip-path: polygon(<length|percentage> <length|percentage>, ...);

The polygon shape is a somewhat special case in terms of the properties it can animate. Each property represents a vertex of the shape, and at least three are required. The number of vertices beyond the required three is only limited by the requirements of the desired shape. For each keyframe of an animation, or the two steps in a transition, the number of vertices must always match for a smooth animation. A change in the number of vertices can be animated, but will cause a popping in or out effect at each keyframe.

.polygon-enter-active { animation: 1s polygon reverse; }
.polygon-leave-active { animation: 1s polygon; }

@keyframes polygon {
  0% { clip-path: polygon(0 0, 50% 0, 100% 0, 100% 50%, 100% 100%, 50% 100%, 0 100%, 0 50%); }
  100% { clip-path: polygon(50% 50%, 50% 25%, 50% 50%, 75% 50%, 50% 50%, 50% 75%, 50% 50%, 25% 50%); }
}

The eight vertices in the polygon shape make a square with a vertex in the four corners and the midpoint of all four sides. On the leave transition, the shape’s corners animate inwards to the center while the side’s midpoints animate inward halfway to the center. The enter transition plays the animation in reverse by means of the "reverse" keyword in the animation property.

The effect is a square that collapses inward down to a plus shape that wipes away the element. The elements then switch, with the second element appearing in a growing plus shape that expands into a square.

Let’s get into some simple movements

OK, we’re going to dial things up a bit now that we’ve gotten past the basics. This demo shows various ways to have movement in a clip-path animation. The circle and ellipse shapes provide an easy way to animate movement through the position of the shape. The inset and polygon shapes can be animated in a way to give the appearance of position-based movement.

See the Pen
Animating Clip-Path: Simple Movements
by Travis Almand (@talmand)
on CodePen.

Let’s break those out just like we did before.

Slide Down

The slide down transition consists of two different animations using the inset shape. The first, which is the leave animation, animates the top value of the inset shape from 0% to 100% providing the appearance of the entire square sliding downward out of view. The second, which is the enter animation, has the bottom value at 100% and then animates it down towards 0% providing the appearance of the entire square sliding downward into view.

.down-enter-active { animation: 1s down-enter; }
.down-leave-active { animation: 1s down-leave; }

@keyframes down-enter {
  0% { clip-path: inset(0 0 100% 0); }
  100% { clip-path: inset(0); }
}

@keyframes down-leave {
  0% { clip-path: inset(0); }
  100% { clip-path: inset(100% 0 0 0); }
}

As you can see, the number of sides being defined in the inset path does not need to match. When the shape needs to be the full square, a single zero is all that is required. It can then animate to the new state even when the number of defined sides increases to four.

Box-Wipe

The box-wipe transition consists of two animations, again using the inset shape. The first, which is the leave animation, animates the entire square down to a half-size square positioned on the element’s left side. The smaller square then slides to the right out of view. The second, which is the enter animation, animates a similar half-size square into view from the left over to the element’s right side. Then it expands outward to reveal the entire element.

.box-wipe-enter-active { animation: 1s box-wipe-enter; }
.box-wipe-leave-active { animation: 1s box-wipe-leave; }

@keyframes box-wipe-enter {
  0% { clip-path: inset(25% 100% 25% -50%); }
  50% { clip-path: inset(25% 0% 25% 50%); }
  100% { clip-path: inset(0); }
}

@keyframes box-wipe-leave {
  0% { clip-path: inset(0); }
  50% { clip-path: inset(25% 50% 25% 0%); }
  100% { clip-path: inset(25% -50% 25% 100%); }
}

When the full element is shown, the inset is at zero. The 50% keyframes define a half-size square that is placed on either the left or right; the two values representing the left and right edges are swapped between them. Then the square is moved to the opposite side. As one side is pushed to 100%, the other must go to -50% to maintain the shape. If it were to animate to zero instead of -50%, then the square would shrink as it animated across instead of moving out of view.

Rotate

The rotate transition is one animation with five keyframes using the polygon shape. The initial keyframe defines the polygon with four vertices that shows the entire element. Then, the next keyframe changes the x and y coordinates of each vertex to be moved inward and near the next vertex in a clockwise fashion. After all four vertices have been transitioned, it appears the square has shrunk and rotated a quarter turn. The following keyframes do the same until the square is collapsed down to the center of the element. The leave transition plays the animation normally while the enter transition plays the animation in reverse.

.rotate-enter-active { animation: 1s rotate reverse; }
.rotate-leave-active { animation: 1s rotate; }

@keyframes rotate {
  0% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%); }
  25% { clip-path: polygon(87.5% 12.5%, 87.5% 87.5%, 12.5% 87.5%, 12.5% 12.5%); }
  50% { clip-path: polygon(75% 75%, 25% 75%, 25% 25%, 75% 25%); }
  75% { clip-path: polygon(37.5% 62.5%, 37.5% 37.5%, 62.5% 37.5%, 62.5% 62.5%); }
  100% { clip-path: polygon(50% 50%, 50% 50%, 50% 50%, 50% 50%); }
}

Polygons can be animated into any other position once their vertices have been set, as long as each keyframe has the same number of vertices. This can make for many interesting effects with careful planning.

Spotlight

The spotlight transition is one animation with five keyframes using the circle shape. The initial keyframe defines a full-size circle positioned at the center to show the entire element. The next keyframe shrinks the circle down to twenty percent. Each following keyframe animates the position values of the circle to move it to different points on the element until it moves out of view to the left. The leave transition plays the animation normally while the enter transition plays the animation in reverse.

.spotlight-enter-active { animation: 2s spotlight reverse; }
.spotlight-leave-active { animation: 2s spotlight; }

@keyframes spotlight {
  0% { clip-path: circle(100% at 50% 50%); }
  25% { clip-path: circle(20% at 50% 50%); }
  50% { clip-path: circle(20% at 12% 84%); }
  75% { clip-path: circle(20% at 93% 51%); }
  100% { clip-path: circle(20% at -30% 20%); }
}

This may be a complex-looking animation at first, but it turns out it only requires simple changes in each keyframe.

More adventurous stuff

Like the shapes and simple movements examples, I’ve made a demo that contains more complex animations. We’ll break these down individually as well.

See the Pen
Animating Clip-Path: Complex Shapes
by Travis Almand (@talmand)
on CodePen.

All of these examples make heavy use of the polygon shape. They take advantage of features like stacking vertices to make elements appear "welded" and repositioning vertices around for movement.

Check out Ana Tudor’s "Cutting out the inner part of an element using clip-path" article for a more in-depth example that uses the polygon shape to create complex shapes.

Chevron

The chevron transition is made of two animations, each with three keyframes. The leave transition starts out as a full square with six vertices; there are the four corners but there are an additional two vertices on the left and right sides. The second keyframe animates three of the vertices into place to change the square into a chevron. The third keyframe then moves the vertices out of view to the right. After the elements switch, the enter transition starts with the same chevron shape but it is out of view on the left. The second keyframe moves the chevron into view and then the third keyframe restores the full square.

.chevron-enter-active { animation: 1s chevron-enter; }
.chevron-leave-active { animation: 1s chevron-leave; }

@keyframes chevron-enter {
  0% { clip-path: polygon(-25% 0%, 0% 50%, -25% 100%, -100% 100%, -75% 50%, -100% 0%); }
  75% { clip-path: polygon(75% 0%, 100% 50%, 75% 100%, 0% 100%, 25% 50%, 0% 0%); }
  100% { clip-path: polygon(100% 0%, 100% 50%, 100% 100%, 0% 100%, 0% 50%, 0% 0%); }
}

@keyframes chevron-leave {
  0% { clip-path: polygon(100% 0%, 100% 50%, 100% 100%, 0% 100%, 0% 50%, 0% 0%); }
  25% { clip-path: polygon(75% 0%, 100% 50%, 75% 100%, 0% 100%, 25% 50%, 0% 0%); }
  100% { clip-path: polygon(175% 0%, 200% 50%, 175% 100%, 100% 100%, 125% 50%, 100% 0%); }
}

Spiral

The spiral transition is a strong example of a complicated series of vertices in the polygon shape. The polygon is created to define a shape that spirals inward clockwise from the upper-left of the element. Since the vertices create lines that stack on top of each other, it all appears as a single square. Over the eight keyframes of the animation, vertices are moved to be on top of neighboring vertices. This makes the shape appear to unwind counter-clockwise to the upper-left, wiping away the element during the leave transition. The enter transition replays the animation in reverse.

.spiral-enter-active { animation: 1s spiral reverse; }
.spiral-leave-active { animation: 1s spiral; }

@keyframes spiral {
  0% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 25%, 75% 25%, 75% 75%, 25% 75%, 25% 50%, 50% 50%, 25% 50%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); }
  14.25% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 25%, 75% 25%, 75% 75%, 50% 75%, 50% 50%, 50% 50%, 25% 50%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); }
  28.5% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 25%, 75% 25%, 75% 50%, 50% 50%, 50% 50%, 50% 50%, 25% 50%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); }
  42.75% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 25%, 25% 25%, 25% 50%, 25% 50%, 25% 50%, 25% 50%, 25% 50%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); }
  57% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 75%, 25% 75%, 25% 75%, 25% 75%, 25% 75%, 25% 75%, 25% 75%, 25% 75%, 75% 75%, 75% 25%, 0% 25%); }
  71.25% { clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 75% 100%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 75%, 75% 25%, 0% 25%); }
  85.5% { clip-path: polygon(0% 0%, 100% 0%, 100% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 75% 25%, 0% 25%); }
  100% { clip-path: polygon(0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 0%, 0% 25%, 0% 25%, 0% 25%, 0% 25%, 0% 25%, 0% 25%, 0% 25%); }
}

Slots

The slots transition is made of a series of vertices arranged in a pattern of vertical slots with vertices stacked on top of each other for a complete square. The general idea is that the shape starts in the upper-left and the next vertex is 14% to the right. Next vertex is in the exact same spot. Then the one after that is another 14% to the right, and so on until the upper-right corner is reached. This creates a series of "sections" along the top of the shape that are aligned horizontally. The second keyframe then animates every even section downward to the bottom of the element. This gives the appearance of vertical slots wiping away their parts of the element. The third keyframe then moves the remaining sections at the top to the bottom. Overall, the leave transition wipes away half the element in vertical slots and then the other half. The enter transition reverses the animation.

.slots-enter-active { animation: 1s slots reverse; }
.slots-leave-active { animation: 1s slots; }

@keyframes slots {
  0% { clip-path: polygon(0% 0%, 14% 0%, 14% 0%, 28% 0%, 28% 0%, 42% 0%, 42% 0%, 56% 0%, 56% 0%, 70% 0%, 70% 0%, 84% 0%, 84% 0%, 100% 0, 100% 100%, 0% 100%); }
  50% { clip-path: polygon(0% 0%, 14% 0%, 14% 100%, 28% 100%, 28% 0%, 42% 0%, 42% 100%, 56% 100%, 56% 0%, 70% 0%, 70% 100%, 84% 100%, 84% 0%, 100% 0, 100% 100%, 0% 100%); }
  100% { clip-path: polygon(0% 100%, 14% 100%, 14% 100%, 28% 100%, 28% 100%, 42% 100%, 42% 100%, 56% 100%, 56% 100%, 70% 100%, 70% 100%, 84% 100%, 84% 100%, 100% 100%, 100% 100%, 0% 100%); }
}

Shutters

The shutters transition is very similar to the slots transition above. Instead of sections along the top, it creates vertical sections that are placed in line with each other to create the entire square. Starting with the upper-left, the second vertex is positioned at the top and 20% to the right. The next vertex is placed in the same place horizontally but is at the bottom of the element. The next vertex after that is in the same spot, with the next one back at the top, on top of the vertex from two steps ago. This is repeated several times across the element until the right side is reached. If the lines of the shape were visible, then it would appear as a series of vertical sections lined up horizontally across the element. During the animation, the left side of each section is moved over to be on top of the right side. This creates a wiping effect that looks like vertical shutters of a window. The enter transition plays the animation in reverse.

.shutters-enter-active { animation: 1s shutters reverse; }
.shutters-leave-active { animation: 1s shutters; }

@keyframes shutters {
  0% { clip-path: polygon(0% 0%, 20% 0%, 20% 100%, 20% 100%, 20% 0%, 40% 0%, 40% 100%, 40% 100%, 40% 0%, 60% 0%, 60% 100%, 60% 100%, 60% 0%, 80% 0%, 80% 100%, 80% 100%, 80% 0%, 100% 0%, 100% 100%, 0% 100%); }
  100% { clip-path: polygon(20% 0%, 20% 0%, 20% 100%, 40% 100%, 40% 0%, 40% 0%, 40% 100%, 60% 100%, 60% 0%, 60% 0%, 60% 100%, 80% 100%, 80% 0%, 80% 0%, 80% 100%, 100% 100%, 100% 0%, 100% 0%, 100% 100%, 20% 100%); }
}

Star

The star transition takes advantage of how clip-path renders positive and negative space when the lines defining the shape overlap and cross each other. The shape starts as a square with eight vertices; one in each corner and one on each side. There are only three keyframes but there’s a large amount of movement in each one. The leave transition starts with the square and then moves each vertex on a side to the opposite side. Therefore, the top vertex goes to the bottom, the bottom vertex goes to the top, and the vertices on the left and right do the same swap. This creates criss-crossing lines that form a star shape in the positive space. The final keyframe then moves the vertices in each corner to the center of the shape which makes the star collapse in on itself wiping the element away. The enter transition plays the same in reverse.

.star-enter-active { animation: 1s star reverse; }
.star-leave-active { animation: 1s star; }

@keyframes star {
  0% { clip-path: polygon(0% 0%, 50% 0%, 100% 0%, 100% 50%, 100% 100%, 50% 100%, 0% 100%, 0% 50%); }
  50% { clip-path: polygon(0% 0%, 50% 100%, 100% 0%, 0% 50%, 100% 100%, 50% 0%, 0% 100%, 100% 50%); }
  100% { clip-path: polygon(50% 50%, 50% 100%, 50% 50%, 0% 50%, 50% 50%, 50% 0%, 50% 50%, 100% 50%); }
}

Path shapes

OK, so we’ve looked at a lot of examples of animations using clip-path shape functions. One function we haven’t spent time with is path. It’s perhaps the most flexible of the bunch because we can draw custom, or even multiple, shapes with it. Chris has written and even spoken on it before.

So, while I created a demo for this set of examples as well, note that clip-path paths are experimental technology. As of this writing, the feature is only available in Firefox 63 or higher behind the layout.css.clip-path-path.enabled flag, which can be enabled in about:config.

See the Pen
Animating Clip-Path: Path Shapes
by Travis Almand (@talmand)
on CodePen.

This demo shows several uses of paths that are animated for transitions. The paths are the same type of paths found in SVG and can be lifted from the path attribute to be used in the clip-path CSS property on an element. Each of the paths in the demo was actually taken from an SVG I made by hand for each keyframe of the animations. Much like animating with the polygon shape, careful planning is required, as the number of vertices in the path cannot change; they can only be repositioned.

An advantage to using paths is that a single path can consist of multiple shapes, each animated separately for fine-tuned control over the positive and negative space. Another interesting aspect is that path supports Bézier curves. Creating the vertices is similar to the polygon shape, except that polygon doesn’t support Bézier curves. A bonus of this feature is that even the curves can be animated.

That said, a disadvantage is that a path has to be built specifically for the size of the element. That’s because there is no percentage-based placement like we have with the other clip-path shapes. So, all the demos for this article have elements that are 200px square, and the paths in this specific demo are built for that size. Any other size or dimensions will lead to different outcomes.

Alright, enough talk. Let’s get to the examples because they’re pretty sweet.

Iris

The iris transition consists of four small shapes that form together to make a complete large shape that splits in an iris pattern, much like a sci-fi type door. Each shape has its vertices moved and slightly rotated in the direction away from the center to move off their respective side of the element. This is done with only two keyframes. The leave transition has the shapes move out of view while the enter transition reverses the effect. The path is formatted in a way to make each shape in the path obvious. Each line that starts with "M" is a new shape in the path.

.iris-enter-active { animation: 1s iris reverse; }
.iris-leave-active { animation: 1s iris; }

@keyframes iris {
  0% {
    clip-path: path('
      M103.13 100C103 32.96 135.29 -0.37 200 0L0 0C0.35 66.42 34.73 99.75 103.13 100Z
      M199.35 200C199.83 133.21 167.75 99.88 103.13 100C102.94 165.93 68.72 199.26 0.46 200L199.35 200Z
      M103.13 100C167.46 99.75 199.54 133.09 199.35 200L200 0C135.15 -0.86 102.86 32.47 103.13 100Z
      M0 200C68.63 200 103 166.67 103.13 100C34.36 100.12 -0.02 66.79 0 0L0 200Z
    ');
  }
  100% {
    clip-path: path('
      M60.85 2.56C108.17 -44.93 154.57 -45.66 200.06 0.35L58.64 -141.07C11.93 -93.85 12.67 -45.97 60.85 2.56Z
      M139.87 340.05C187.44 293.16 188.33 246.91 142.54 201.29C95.79 247.78 48.02 247.15 -0.77 199.41L139.87 340.05Z
      M201.68 61.75C247.35 107.07 246.46 153.32 199.01 200.5L340.89 59.54C295.65 13.07 249.25 13.81 201.68 61.75Z
      M-140.61 141.25C-92.08 189.78 -44.21 190.51 3.02 143.46C-45.69 94.92 -46.43 47.05 0.81 -0.17L-140.61 141.25Z
    ');
  }
}

Melt

The melt transition consists of two different animations for both entering and leaving. In the leave transition, the path is a square but the top side is made up of several Bézier curves. At first, these curves are made to be completely flat and then are animated downward to stop beyond the bottom of the shape. As these curves move downward, they are animated in different ways so that each curve adjusts differently than the others. This gives the appearance of the element melting out of view below the bottom.

The enter transition does much the same, except that the curves are on the bottom of the square. The curves start at the top and are completely flat. Then they are animated downward with the same curve adjustments. This gives the appearance of the second element melting into view to the bottom.

.melt-enter-active { animation: 2s melt-enter; }
.melt-leave-active { animation: 2s melt-leave; }

@keyframes melt-enter {
  0% { clip-path: path('M0 -0.12C8.33 -8.46 16.67 -12.62 25 -12.62C37.5 -12.62 35.91 0.15 50 -0.12C64.09 -0.4 62.5 -34.5 75 -34.5C87.5 -34.5 87.17 -4.45 100 -0.12C112.83 4.2 112.71 -17.95 125 -18.28C137.29 -18.62 137.76 1.54 150.48 -0.12C163.19 -1.79 162.16 -25.12 174.54 -25.12C182.79 -25.12 191.28 -16.79 200 -0.12L200 -34.37L0 -34.37L0 -0.12Z'); }
  100% { clip-path: path('M0 199.88C8.33 270.71 16.67 306.13 25 306.13C37.5 306.13 35.91 231.4 50 231.13C64.09 230.85 62.5 284.25 75 284.25C87.5 284.25 87.17 208.05 100 212.38C112.83 216.7 112.71 300.8 125 300.47C137.29 300.13 137.76 239.04 150.48 237.38C163.19 235.71 162.16 293.63 174.54 293.63C182.79 293.63 191.28 262.38 200 199.88L200 0.13L0 0.13L0 199.88Z'); }
}

@keyframes melt-leave {
  0% { clip-path: path('M0 0C8.33 -8.33 16.67 -12.5 25 -12.5C37.5 -12.5 36.57 -0.27 50 0C63.43 0.27 62.5 -34.37 75 -34.37C87.5 -34.37 87.5 -4.01 100 0C112.5 4.01 112.38 -18.34 125 -18.34C137.62 -18.34 138.09 1.66 150.48 0C162.86 -1.66 162.16 -25 174.54 -25C182.79 -25 191.28 -16.67 200 0L200 200L0 200L0 0Z'); }
  100% { clip-path: path('M0 200C8.33 270.83 16.67 306.25 25 306.25C37.5 306.25 36.57 230.98 50 231.25C63.43 231.52 62.5 284.38 75 284.38C87.5 284.38 87.5 208.49 100 212.5C112.5 216.51 112.38 300.41 125 300.41C137.62 300.41 138.09 239.16 150.48 237.5C162.86 235.84 162.16 293.75 174.54 293.75C182.79 293.75 191.28 262.5 200 200L200 200L0 200L0 200Z'); }
}

Door

The door transition is similar to the iris transition we looked at first — it’s a "door" effect with shapes that move independently of each other. The path is made up of four shapes: two are half-circles located at the top and bottom while the other two split the leftover positive space. This shows that, not only can each shape in the path animate separately from each other, they can also be completely different shapes.

In the leave transition, each shape moves away from the center out of view on its own side. The top half-circle moves upward leaving a hole behind and the bottom half-circle does the same. The left and right sides then slide away in a separate keyframe. Then the enter transition simply reverses the animation.

.door-enter-active { animation: 1s door reverse; }
.door-leave-active { animation: 1s door; }

@keyframes door {
  0% {
    clip-path: path('
      M0 0C16.03 0.05 32.7 0.05 50 0C50.05 27.36 74.37 50.01 100 50C99.96 89.53 100.08 136.71 100 150C70.48 149.9 50.24 175.5 50 200C31.56 199.95 14.89 199.95 0 200L0 0Z
      M200 0C183.46 -0.08 166.79 -0.08 150 0C149.95 21.45 133.25 49.82 100 50C100.04 89.53 99.92 136.71 100 150C130.29 150.29 149.95 175.69 150 200C167.94 199.7 184.6 199.7 200 200L200 0Z
      M100 50C130.83 49.81 149.67 24.31 150 0C127.86 0.07 66.69 0.07 50 0C50.26 23.17 69.36 49.81 100 50Z
      M100 150C130.83 150.19 149.67 175.69 150 200C127.86 199.93 66.69 199.93 50 200C50.26 176.83 69.36 150.19 100 150Z
    ');
  }
  50% {
    clip-path: path('
      M0 0C16.03 0.05 32.7 0.05 50 0C50.05 27.36 74.37 50.01 100 50C99.96 89.53 100.08 136.71 100 150C70.48 149.9 50.24 175.5 50 200C31.56 199.95 14.89 199.95 0 200L0 0Z
      M200 0C183.46 -0.08 166.79 -0.08 150 0C149.95 21.45 133.25 49.82 100 50C100.04 89.53 99.92 136.71 100 150C130.29 150.29 149.95 175.69 150 200C167.94 199.7 184.6 199.7 200 200L200 0Z
      M100 -6.25C130.83 -6.44 149.67 -31.94 150 -56.25C127.86 -56.18 66.69 -56.18 50 -56.25C50.26 -33.08 69.36 -6.44 100 -6.25Z
      M100 206.25C130.83 206.44 149.67 231.94 150 256.25C127.86 256.18 66.69 256.18 50 256.25C50.26 233.08 69.36 206.44 100 206.25Z
    ');
  }
  100% {
    clip-path: path('
      M-106.25 0C-90.22 0.05 -73.55 0.05 -56.25 0C-56.2 27.36 -31.88 50.01 -6.25 50C-6.29 89.53 -6.17 136.71 -6.25 150C-35.77 149.9 -56.01 175.5 -56.25 200C-74.69 199.95 -91.36 199.95 -106.25 200L-106.25 0Z
      M306.25 0C289.71 -0.08 273.04 -0.08 256.25 0C256.2 21.45 239.5 49.82 206.25 50C206.29 89.53 206.17 136.71 206.25 150C236.54 150.29 256.2 175.69 256.25 200C274.19 199.7 290.85 199.7 306.25 200L306.25 0Z
      M100 -6.25C130.83 -6.44 149.67 -31.94 150 -56.25C127.86 -56.18 66.69 -56.18 50 -56.25C50.26 -33.08 69.36 -6.44 100 -6.25Z
      M100 206.25C130.83 206.44 149.67 231.94 150 256.25C127.86 256.18 66.69 256.18 50 256.25C50.26 233.08 69.36 206.44 100 206.25Z
    ');
  }
}

X-Plus

This transition is different than most of the demos for this article. That’s because other demos show animating the "positive" space of the clip-path for transitions. It turns out that animating the "negative" space can be difficult with the traditional clip-path shapes. It can be done with the polygon shape but requires careful placement of vertices to create the negative space and animate them as necessary. This demo takes advantage of having two shapes in the path; there’s one shape that’s a huge square surrounding the space of the element and another shape in the center of this square. The center shape (in this case an x or +) excludes or "carves" out negative space in the outside shape. Then the center shape’s vertices are animated so that only the negative space is being animated.

The leave animation starts with the center shape as a tiny "x" that grows in size until the element is wiped from view. In the enter animation, the center shape is a "+" that is already larger than the element and shrinks down to nothing.

.x-plus-enter-active { animation: 1s x-plus-enter; }
.x-plus-leave-active { animation: 1s x-plus-leave; }

@keyframes x-plus-enter {
  0% { clip-path: path('M-400 600L-400 -400L600 -400L600 600L-400 600ZM0.01 -0.02L-200 -0.02L-200 199.98L0.01 199.98L0.01 400L200.01 400L200.01 199.98L400 199.98L400 -0.02L200.01 -0.02L200.01 -200L0.01 -200L0.01 -0.02Z'); }
  100% { clip-path: path('M-400 600L-400 -400L600 -400L600 600L-400 600ZM98.33 98.33L95 98.33L95 101.67L98.33 101.67L98.33 105L101.67 105L101.67 101.67L105 101.67L105 98.33L101.67 98.33L101.67 95L98.33 95L98.33 98.33Z'); }
}

@keyframes x-plus-leave {
  0% { clip-path: path('M-400 600L-400 -400L600 -400L600 600L-400 600ZM96.79 95L95 96.79L98.2 100L95 103.2L96.79 105L100 101.79L103.2 105L105 103.2L101.79 100L105 96.79L103.2 95L100 98.2L96.79 95Z'); }
  100% { clip-path: path('M-400 600L-400 -400L600 -400L600 600L-400 600ZM-92.31 -200L-200 -92.31L-7.69 100L-200 292.31L-92.31 400L100 207.69L292.31 400L400 292.31L207.69 100L400 -92.31L292.31 -200L100 -7.69L-92.31 -200Z'); }
}

Drops

The drops transition takes advantage of the ability to have multiple shapes in the same path. The path has ten circles placed strategically inside the area of the element. They start out tiny and unseen, then are animated to a larger size over time. There are ten keyframes in the animation, and each keyframe resizes one circle while maintaining the state of any previously resized circles. This gives the appearance of circles popping in or out of view one after the other during the animation.

The leave transition shrinks the circles out of view one at a time as the negative space grows to wipe out the element. The enter transition plays the animation in reverse so that the circles enlarge and the positive space grows to reveal the new element.

The CSS used for the drops transition is rather large, so take a look at the CSS section of the CodePen demo starting with the .drops-enter-active selector.
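Still, a heavily reduced sketch can show the core trick. Here, two circles stand in for the ten (the class names are made up, not taken from the demo), and each keyframe resizes one circle while repeating the current size of the other:

/* A heavily reduced sketch of the technique: two circles instead of ten,
   with made-up class names. Each keyframe resizes one circle while
   repeating the current size of the other so paths stay interpolatable. */
.drops-sketch-leave-active {
  animation: 1s drops-sketch;
}
@keyframes drops-sketch {
  0% {
    /* Both circles at full size */
    clip-path: path('M25 50a25 25 0 1 0 50 0a25 25 0 1 0 -50 0Z M125 50a25 25 0 1 0 50 0a25 25 0 1 0 -50 0Z');
  }
  50% {
    /* First circle shrunk to a point, second still at full size */
    clip-path: path('M50 50a0 0 0 1 0 0 0a0 0 0 1 0 0 0Z M125 50a25 25 0 1 0 50 0a25 25 0 1 0 -50 0Z');
  }
  100% {
    /* Both circles shrunk away */
    clip-path: path('M50 50a0 0 0 1 0 0 0a0 0 0 1 0 0 0Z M150 50a0 0 0 1 0 0 0a0 0 0 1 0 0 0Z');
  }
}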

Numbers

This transition is similar to the x-plus transition above — it uses a negative shape for the animation inside a larger positive shape. In this demo, the animated shape changes through the numbers 1, 2, and 3 until the element is wiped away or revealed. The numeric shapes were created by manipulating the vertices of each number into the shape of the next number. So, each number shape has the same number of vertices and curves that animate correctly from one to the next.

The leave transition starts with the center shape present but made to be unseen. It then animates into the shape of the first number. The next keyframe animates to the next number, and so on. The enter transition plays the whole thing in reverse.

The CSS used for this is ginormous just like the last one, so take a look at the CSS section of the CodePen demo starting with the .numbers-enter-active selector.

Hopefully this article has given you a good idea of how clip-path can be used to create flexible and powerful animations that can be both straightforward and complex. Animations can add a nice touch to a design and even help provide context when switching from one state to another. At the same time, remember to be mindful of those who may prefer to limit the amount of animation or movement, for example, by setting reduced motion preferences.

The post Animating with Clip-Path appeared first on CSS-Tricks.

The Many Ways to Include CSS in JavaScript Applications

Css Tricks - Mon, 07/08/2019 - 10:45am

Welcome to an incredibly controversial topic in the land of front-end development! I’m sure that a majority of you reading this have encountered your fair share of #hotdrama surrounding how CSS should be handled within a JavaScript application.

I want to preface this post with a disclaimer: There is no hard and fast rule that establishes one method of handling CSS in a React, or Vue, or Angular application as superior. Every project is different, and every method has its merits! That may seem ambiguous, but what I do know is that the development community we exist in is full of people who are continuously seeking new information, and looking to push the web forward in meaningful ways.

Preconceived notions about this topic aside, let’s take a look at the fascinating world of CSS architecture!

Let us count the ways

Simply Googling "How to add CSS to [insert framework here]" yields a flurry of strongly held beliefs and opinions about how styles should be applied to a project. To try to help cut through some of the noise, let’s examine a few of the more commonly utilized methods at a high level, along with their purpose.

Option 1: A dang ol’ stylesheet

We’ll start off with what is a familiar approach: a dang ol’ stylesheet. We absolutely are able to <link> to an external stylesheet within our application and call it a day.

<link rel="stylesheet" href="styles.css">

We can write normal CSS that we’re used to and move on with our lives. Nothing wrong with that at all, but as an application gets larger, and more complex, it becomes harder and harder to maintain a single stylesheet. Parsing thousands of lines of CSS that are responsible for styling your entire application becomes a pain for any developer working on the project. The cascade is a beautiful thing, but it also becomes tough to manage in the sense that some styles you — or other devs on the project — write will introduce regressions into other parts of the application. We’ve experienced these issues before, and things like Sass (and, more recently, PostCSS) have been introduced to help us handle them.

We could continue down this path and utilize the awesome power of PostCSS to write very modular CSS partials that are strung together via @import rules. This requires a little bit of work within a webpack config to be properly set up, but something you can handle for sure!
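For instance, the PostCSS side of that webpack work can be as small as a postcss.config.js file. This is just one possible setup; it assumes the postcss-import and postcss-preset-env plugins are installed alongside postcss-loader:

// postcss.config.js (one possible setup; assumes postcss-import and
// postcss-preset-env are installed alongside postcss-loader)
module.exports = {
  plugins: {
    'postcss-import': {},     // inlines the @import-ed partials
    'postcss-preset-env': {}, // modern CSS features plus autoprefixing
  },
};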

No matter what compiler you decide to use (or not use) at the end of the day, you’ll be serving one CSS file that houses all of the styles for your application via a <link> tag in the header. Depending on the complexity of that application, this file has the potential to get pretty bloated, hard to load asynchronously, and render-blocking for the rest of your application. (Sure, render-blocking isn’t always a bad thing, but for all intents and purposes, we’ll generalize a bit here and avoid render blocking scripts and styles wherever we can.)

That’s not to say that this method doesn’t have its merits. For a small application, or an application built by a team with less of a focus on the front end, a single stylesheet may be a good call. It provides clear separation between business logic and application styles, and because it’s not generated by our application, is fully within our control to ensure exactly what we write is exactly what is output. Additionally, a single CSS file is fairly easy for the browser to cache, so that returning users don’t have to re-download the entire file on their next visit.

But let’s say that we’re looking for a bit more of a robust CSS architecture that allows us to leverage the power of tooling. Something to help us manage an application that requires a bit more of a nuanced approach. Enter CSS Modules.

Option 2: CSS Modules

One fairly large problem within a single stylesheet is the risk of regression. Writing CSS that utilizes a fairly non-specific selector could end up altering another component in a completely different area of your application. This is where an approach called "scoped styles" comes in handy.

Scoped styles allow us to programmatically generate class names specific to a component. That scopes those styles to the component and ensures their class names will be unique. This leads to auto-generated class names like header__2lexd. The bit at the end is a unique hashed selector, so even if you had another component named header, you could apply a header class to it and our scoped styles would generate a new hashed suffix, like so: header__15qy_.

CSS Modules offer ways, depending on implementation, to control the generated class name, but I’ll leave it to the CSS Modules documentation to cover that.

Once all is said and done, we are still generating a single CSS file that is delivered to the browser via a <link> tag in the header. This comes with the same potential drawbacks (render blocking, file size bloat, etc.) and some of the benefits (caching, mostly) that we covered above. But this method, because of its purpose of scoping styles, comes with another caveat: the removal of the global scope — at least initially.

Imagine you want to employ the use of a .screen-reader-text global class that can be applied to any component within your application. If using CSS Modules, you’d have to reach for the :global pseudo selector that explicitly defines the CSS within it as something that is allowed to be globally accessed by other components in the app. As long as you import the stylesheet that includes your :global declaration block into your component’s stylesheet, you’ll have the use of that global selector. Not a huge drawback, but something that takes getting used to.

Here’s an example of the :global pseudo selector in action:

// typography.css
:global {
  .aligncenter {
    text-align: center;
  }
  .alignright {
    text-align: right;
  }
  .alignleft {
    text-align: left;
  }
}

You may run the risk of dropping a whole bunch of global selectors for typography, forms, and just general elements that most sites have into one single :global selector. Luckily, through the magic of things like PostCSS Nested or Sass, you can import partials directly into the selector to keep things a bit cleaner:

// main.scss
:global {
  @import "typography";
  @import "forms";
}

This way, you can write your partials without the :global selector, and just import them directly into your main stylesheet.

Another bit that takes some getting used to is how class names are referenced within DOM nodes. I’ll let the individual docs for Vue, React, and Angular speak for themselves there. I’ll also leave you with a little example of what those class references look like utilized within a React component:

// ./css/Button.css
.btn {
  background-color: blanchedalmond;
  font-size: 1.4rem;
  padding: 1rem 2rem;
  text-transform: uppercase;
  transition: background-color ease 300ms, border-color ease 300ms;

  &:hover {
    background-color: #000;
    color: #fff;
  }
}

// ./Button.js
import styles from "./css/Button.css";

const Button = () => (
  <button className={styles.btn}>
    Click me!
  </button>
);

export default Button;

The CSS Modules method, again, has some great use cases. For applications looking to take advantage of scoped styles while maintaining the performance benefits of a static, but compiled stylesheet, then CSS Modules may be the right fit for you!

It’s worth noting here as well that CSS Modules can be combined with your favorite flavor of CSS preprocessing. Sass, Less, PostCSS, etc. are all able to be integrated into the build process utilizing CSS Modules.

But let’s say your application could benefit from having its styles included within your JavaScript. Perhaps gaining access to the various states of your components, and reacting based off of the changing state, would be beneficial as well. Let’s say you want to easily incorporate critical CSS into your application, too! Enter CSS-in-JS.

Option 3: CSS-in-JS

CSS-in-JS is a fairly broad topic. There are several packages that work to make writing CSS-in-JS as painless as possible. Frameworks like JSS, Emotion, and Styled Components are just a few of the many packages that comprise this topic.

As a broad-strokes explanation, CSS-in-JS largely operates the same way across most of these frameworks. You write CSS associated with your individual component, and your build process compiles the application. When this happens, most CSS-in-JS frameworks will output the associated CSS of only the components that are rendered on the page at any given time. CSS-in-JS frameworks do this by outputting that CSS within a <style> tag in the <head> of your application. This gives you a critical CSS loading strategy out of the box! Additionally, much like CSS Modules, the styles are scoped, and the class names are hashed.

As you navigate around your application, the components that are unmounted will have their styles removed from the <head> and your incoming components that are mounted will have their styles appended in their place. This provides an opportunity for performance benefits in your application. It removes an HTTP request, it is not render-blocking, and it ensures that your users only download what they need to view the page at any given time.

Another interesting opportunity CSS-in-JS provides is the ability to reference various component states and functions in order to render different CSS. This could be as simple as replicating a class toggle based on some state change, or be as complex as something like theming.
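As a hypothetical sketch of that state-based styling (the Alert component and its variant prop are made up for illustration):

// A hypothetical sketch: the Alert component and its `variant` prop
// are made up for illustration.
import styled from 'styled-components';

const Alert = styled.div`
  padding: 1rem 2rem;
  color: #fff;
  background-color: ${props =>
    props.variant === 'error' ? '#c0392b' : '#2980b9'};
`;

const Notice = () => (
  <Alert variant="error">Something went wrong!</Alert>
);

export default Notice;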

Because CSS-in-JS is a fairly #hotdrama topic, I realized that there are a lot of different ways that folks are trying to go about this. Now, I share the feelings of many other people who hold CSS in high regard, especially when it comes to leveraging JavaScript to write CSS. My initial reactions to this approach were fairly negative. I did not like the idea of cross-contaminating the two. But I wanted to keep an open mind. Let’s look at some of the features that we as front-end-focused developers would need in order to even consider this approach.

  • If we’re writing CSS-in-JS we have to write real CSS. Several packages offer ways to write template-literal CSS, but require you to camel-case your properties — i.e. padding-left becomes paddingLeft. That’s not something I’m personally willing to sacrifice.
  • Some CSS-in-JS solutions require you to write your styles inline on the element you’re attempting to style. The syntax for that, especially within complex components, starts to get very hectic, and again is not something I’m willing to sacrifice.
  • The use of CSS-in-JS has to provide me with powerful tools that are otherwise super difficult to accomplish with CSS Modules or a dang ol’ stylesheet.
  • We have to be able to leverage forward-thinking CSS like nesting and variables. We also have to be able to incorporate things like Autoprefixer, and other add-ons to enhance the developer experience.

It’s a lot to ask of a framework, but for those of us who have spent most of our lives studying and implementing solutions around a language that we love, we have to make sure that we’re able to continue writing that same language as best we can.

Here’s a quick peek at what a React component using Styled Components could look like:

// ./Button.js
import styled from 'styled-components';

const StyledButton = styled.button`
  background-color: blanchedalmond;
  font-size: 1.4rem;
  padding: 1rem 2rem;
  text-transform: uppercase;
  transition: background-color ease 300ms, border-color ease 300ms;

  &:hover {
    background-color: #000;
    color: #fff;
  }
`;

const Button = () => (
  <StyledButton>
    Click Me!
  </StyledButton>
);

export default Button;

We also need to address the potential downsides of a CSS-in-JS solution — and definitely not as an attempt to spark more drama. With a method like this, it’s incredibly easy for us to fall into a trap that leads us to a bloated JavaScript file with potentially hundreds of lines of CSS — and that all comes before the developer even sees any of the component’s methods or its HTML structure. We can, however, look at this as an opportunity to very closely examine how and why we are building components the way we are. In thinking a bit more deeply about this, we can potentially use it to our advantage and write leaner code, with more reusable components.

Additionally, this method absolutely blurs the line between business logic and application styles. However, with a well-documented and well-thought architecture, other developers on the project can be eased into this idea without feeling overwhelmed.

TL;DR

There are several ways to handle the beast that is CSS architecture on any project and do so while using any framework. The fact that we, as developers, have so many choices is both super exciting, and incredibly overwhelming. However, the overarching theme that I think continues to get lost in super short social media conversations that we end up having, is that each solution has its own merits, and its own inefficiencies. It’s all about how we carefully and thoughtfully implement a system that makes our future selves, and/or other developers who may touch the code, thank us for taking the time to establish that structure.

The post The Many Ways to Include CSS in JavaScript Applications appeared first on CSS-Tricks.

A Little Reminder That Pseudo Elements are Children, Kinda.

Css Tricks - Mon, 07/08/2019 - 10:45am

Here's a container with some child elements:

<div class="container"> <div>item</div> <div>item</div> <div>item</div> </div>

If I do:

.container::before { content: "x" }

I'm essentially doing:

<div class="container"> [[[ ::before psuedo-element here ]]] <div>item</div> <div>item</div> <div>item</div> </div>

Which will behave just like a child element mostly. One tricky thing is that no selector selects it other than the one you used to create it (or a similar selector that is literally a ::before or ::after that ends up in the same place).

To illustrate, say I set up that container to be a 2x3 grid and make each item a kind of pillbox design:

.container {
  display: grid;
  grid-template-columns: 1fr 1fr;
  grid-gap: 0.5rem;
}
.container > * {
  background: darkgray;
  border-radius: 4px;
  padding: 0.5rem;
}

Without the pseudo-element, that would be like this:

If I add that pseudo-element selector as above, I'd get this:

It makes sense, but it can also come as a surprise. Pseudo-elements are often decorative (they should pretty much only be decorative), so having one participate in a content grid just feels weird.

Notice that the .container > * selector didn't pick it up and make it darkgray because you can't select a pseudo-element that way. That's another minor gotcha.

In my day-to-day, I find pseudo-elements are typically absolutely-positioned to do something decorative — so, if you had:

.container::before { content: ""; position: absolute; /* Do something decorative */ }

...you probably wouldn't even notice. Technically, the pseudo-element is still a child, so it's still in there doing its thing, but isn't participating in the grid. This isn't unique to CSS Grid either. For instance, you'll find by using flexbox that your pseudo-element becomes a flex item. You're free to float your pseudo-element or do any other sort of layout with it as well.

DevTools makes it fairly clear that it is in the DOM like a child element:

There are a couple more gotchas!

One is :nth-child(). You'd think that if pseudo-elements are actually children, they would affect :nth-child() calculations, but they don't. That means doing something like this:

.container > :nth-child(2) { background: red; }

...is going to select the same element whether or not there is a ::before pseudo-element. The same is true for ::after, :nth-last-child, and friends. That's why I put "kinda" in the title. If pseudo-elements were exactly like child elements, they would affect these selectors.

Another gotcha is that you can't select a pseudo-element in JavaScript like you could a regular child element. document.querySelector(".container::before"); is going to return null. If the reason you are trying to get your hands on the pseudo-element in JavaScript is to see its styles, you can do that with a little CSSOM magic:

const styles = window.getComputedStyle(
  document.querySelector('.container'),
  '::before'
);

console.log(styles.content); // "x"
console.log(styles.color); // rgb(255, 0, 0)
console.log(styles.getPropertyValue('color')); // rgb(255, 0, 0)

Have you run into any gotchas with pseudo-elements?

The post A Little Reminder That Pseudo Elements are Children, Kinda. appeared first on CSS-Tricks.

Five Methods for Five-Star Ratings

Css Tricks - Fri, 07/05/2019 - 5:48am

In the world of likes and social statistics, reviews are a very important method for leaving feedback. Users often like to know the opinions of others before deciding on items to purchase themselves, or even articles to read, movies to see, or restaurants to dine at.

Developers often struggle with reviews — it is common to see inaccessible and over-complicated implementations. Hey, CSS-Tricks has a snippet for one that’s now bordering on a decade old.

Let’s walk through new, accessible and maintainable approaches for this classic design pattern. Our goal will be to define the requirements and then take a journey on the thought-process and considerations for how to implement them.

Scoping the work

Did you know that using stars as a rating dates all the way back to 1844 when they were first used to rate restaurants in Murray's Handbooks for Travellers — and later popularized by Michelin Guides in 1931 as a three-star system? There’s a lot of history there, so no wonder it’s something we’re used to seeing!

There are a couple of good reasons why they’ve stood the test of time:

  1. Clear visuals (in the form of five hollow or filled stars in a row)
  2. A straightforward label (that provides an accessible description, like aria-label)

When we implement it on the web, it is important that we focus on meeting both of those outcomes.

It is also important to implement features like this in the most versatile way possible. That means we should reach for HTML and CSS as much as possible and try to avoid JavaScript where we can. And that’s because:

  1. JavaScript solutions will always differ per framework. Patterns that are typical in vanilla JavaScript might be anti-patterns in frameworks (e.g. React prohibits direct document manipulation).
  2. Languages like JavaScript evolve fast, which is great for the community, but not so great for articles like this. We want a solution that’s maintainable and relevant for the long haul, so we should base our decisions on consistent, stable tooling.
Methods for creating the visuals

One of the many wonderful things about CSS is that there are often many ways to write the same thing. Well, the same thing goes for how we can tackle drawing stars. There are five options that I see:

  • Using an image file
  • Using a background image
  • Using SVG to draw the shape
  • Using CSS to draw the shape
  • Using Unicode symbols

Which one to choose? It depends. Let's check them all out.

Method 1: Using an image file

Using images means creating elements — at least 5 of them to be exact. Even if we’re calling the same image file for each star in a five-star rating, that’s five total requests. What are the consequences of that?

  1. More DOM nodes make document structure more complex, which could cause a slower page paint. The elements themselves need to render as well, which means either the server response time (if SSR) or the main thread generation (if we’re working in a SPA) has to increase. That doesn’t even account for the rendering logic that has to be implemented.
  2. It does not handle fractional ratings, say 2.3 stars out of 5. That would require a second group of duplicated elements masked with clip-path on top of them. This increases the document’s complexity by a minimum of seven more DOM nodes, and potentially tens of additional CSS property declarations.
  3. Optimized performance ought to consider how images are loaded, and implementing something like lazy-loading for off-screen images becomes increasingly harder when repeated elements like this are added to the mix.
  4. It makes a request, which means that caching TTLs should be configured in order to achieve an instantaneous second image load. However, even if this is configured correctly, the first load will still suffer because it waits on the server’s TTFB. Prefetch and pre-connect techniques, or a service worker, should be considered in order to optimize the first load of the image.
  5. It creates a minimum of five non-meaningful elements for a screen reader. As we discussed earlier, the label is more important than the image itself. There is no reason to leave them in the DOM because they add no meaning to the rating — they are just a common visual.
  6. The images might be a part of manageable media, which means content managers will be able to change the star appearance at any time, even if it’s incorrect.
  7. It allows for a versatile appearance of the star; however, the active state might only be similar to the initial state. It’s not possible to change the image src attribute without JavaScript, and that’s something we’re trying to avoid.

Wondering how the HTML structure might look? Probably something like this:

<div class="Rating" aria-label="Rating of this item is 3 out of 5"> <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active"> <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active"> <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active"> <img src="/static/assets/star.png" class="Rating--Star"> <img src="/static/assets/star.png" class="Rating--Star"> </div>

In order to change the appearance of those stars, we can use multiple CSS properties. For example:

.Rating--Star {
  filter: grayscale(100%); /* maybe we want stars to become grey if inactive */
  opacity: .3; /* maybe we want stars to become opaque */
}

An additional benefit of this method is that the <img> element is set to inline-block by default, so it takes a little bit less styling to position them in a single line.

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Method 2: Using a background image

This was once a fairly common implementation. That said, it still has its pros and cons.

For example:

  1. Sure, it’s only a single server request which alleviates a lot of caching needs. At the same time, we now have to wait for three additional events before displaying the stars: That would be (1) the CSS to download, (2) the CSSOM to parse, and (3) the image itself to download.
  2. It’s super easy to change the state of a star from empty to filled since all we’re really doing is changing the position of a background image. However, having to crack open an image editor and re-upload the file anytime a change is needed in the actual appearance of the stars is not the most ideal thing as far as maintenance goes.
  3. We can use CSS properties like background-repeat property and clip-path to reduce the number of DOM nodes. We could, in a sense, use a single element to make this work. On the other hand, it’s not great that we don’t technically have good accessible markup to identify the images to screen readers and have the stars be recognized as inputs. Well, not easily.

In my opinion, background images are probably best used for complex star appearances where neither CSS nor SVG suffices to get the exact styling down. Otherwise, using background images still presents a lot of compromises.
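For reference, the classic sprite version of this method might look something like this (the file name and dimensions are made up):

/* A sketch of the classic sprite approach. The file name and dimensions are
   made up; stars.png is assumed to hold an empty star on the left half and
   a filled star on the right. */
.Rating--Star {
  display: inline-block;
  width: 24px;
  height: 24px;
  background-image: url("/static/assets/stars.png");
  background-position: 0 0; /* show the empty star */
}
.Rating--Star__active {
  background-position: -24px 0; /* shift over to the filled star */
}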

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Method 3: Using SVG to draw the shape

SVG is great! It has a lot of the same custom drawing benefits as raster images but doesn’t require a server call if it’s inlined because, well, it’s simply code!

We could inline five stars into HTML, but we can do better than that, right? Chris has shown us a nice approach that allows us to provide the SVG markup for a single shape as a <symbol> and call it multiple times with <use>.

<!-- Draw the star as a symbol and remove it from view -->
<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="star" viewBox="214.7 0 182.6 792">
    <!-- <path>s and whatever other shapes in here -->
  </symbol>
</svg>

<!-- Then use anywhere and as many times as we want! -->
<svg class="icon">
  <use xlink:href="#star" />
</svg>
<svg class="icon">
  <use xlink:href="#star" />
</svg>
<svg class="icon">
  <use xlink:href="#star" />
</svg>
<svg class="icon">
  <use xlink:href="#star" />
</svg>
<svg class="icon">
  <use xlink:href="#star" />
</svg>

What are the benefits? Well, we’re talking zero requests, cleaner HTML, no worries about pixelation, and accessible attributes right out of the box. Plus, we’ve got the flexibility to use the stars anywhere and the scale to use them as many times as we want with no additional penalties on performance. Score!

The ultimate benefit is that this doesn’t require additional overhead, either. For example, we don’t need a build process to make this happen and there’s no reliance on additional image editing software to make further changes down the road (though, let’s be honest, it does help).
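And because inlined SVG is just markup, styling the stars is plain CSS. For example (assuming the paths inside the symbol don't hard-code a fill):

/* Assumes the paths inside the symbol don't hard-code a fill,
   so the fill set here inherits through <use> */
.icon {
  width: 32px;
  height: 32px;
  fill: #fc0; /* filled star */
}
.icon.is-empty {
  fill: #ccc; /* empty star */
}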

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Method 4: Using CSS to draw the shape

This method is very similar to the background-image method, though it improves on it by drawing the shape with CSS properties rather than making a call for an image. We might think of CSS as styling elements with borders, fonts and other stuff, but it’s capable of producing some pretty complex artwork as well. Just look at Diana Smith’s now-famous “Francine" portrait.

Francine, a replica of an oil painting created entirely in CSS by Diana Smith (Source)

We’re not going to get that crazy, but you can see where we’re going with this. In fact, there’s already a nice demo of a CSS star shape right here on CSS-Tricks.

See the Pen
Five stars!
by Geoff Graham (@geoffgraham)
on CodePen.

Or, hey, we can get a little more crafty by using the clip-path property to draw a five-point polygon. Even less CSS! But, buyer beware, because your cross-browser support mileage may vary.

See the Pen
5 Clipped Stars!
by Geoff Graham (@geoffgraham)
on CodePen.

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Method 5: Using Unicode symbols

This method is very nice, but very limited in terms of appearance. Why? Because the appearance of the star is set in stone as a Unicode character. But, hey, there are variations for a filled star (★) and an empty star (☆), which is exactly what we need!

Unicode characters are something you can either copy and paste directly into the HTML:

See the Pen
Unicode Stars!
by Geoff Graham (@geoffgraham)
on CodePen.

We can use font, color, width, height, and other properties to size and style things up a bit, but there’s not a whole lot of flexibility here. But this is perhaps the most basic HTML approach of the bunch, to the point that it almost seems too obvious.

Instead, we can move the content into the CSS as a pseudo-element. That unleashes additional styling capabilities, including using custom properties to fill the stars fractionally:

See the Pen
Tiny but accessible 5 star rating
by Fred Genkin (@FredGenkin)
on CodePen.

Let’s break this last example down a bit more because it winds up taking the best benefits from other methods and splices them into a single solution with very little drawback while meeting all of our requirements.

Let's start with the HTML. There’s a single element that makes no calls to the server while maintaining accessibility:

<div class="stars" style="--rating: 2.3;" aria-label="Rating of this product is 2.3 out of 5."></div>

As you may see, the rating value is passed as an inlined custom CSS property (--rating). This means there is no additional rendering logic required, except for displaying the same rating value in the label for better accessibility.

Let’s take a look at that custom property. It’s actually a conversion from a number value to a percentage that’s handled in the CSS using the calc() function:

--percent: calc(var(--rating) / 5 * 100%);

I chose to go this route because CSS properties — like width and linear-gradient — do not accept <number> values. They accept <length> and <percentage> values instead, which carry specific units, like %, px and em. Initially, the rating value is a float, which is a <number> type. Using this conversion helps ensure we can use the value in a number of ways.

Filling the stars may sound tough, but turns out to be quite simple. We need a linear-gradient background to create hard color stops where the gold-colored fill should end:

background: linear-gradient(90deg,
  var(--star-background) var(--percent),
  var(--star-color) var(--percent)
);

Note that I am using custom properties for colors because I want the styles to be easily adjustable. Because custom properties are inherited from the parent element’s styles, you can define them once on the :root element and then override them in an element wrapper. Here’s what I put in the root:

:root {
  --star-size: 60px;
  --star-color: #fff;
  --star-background: #fc0;
}

The last thing I did was clip the background to the shape of the text so that the background gradient takes the shape of the stars. Think of the Unicode stars as stencils that we use to cut out the shape of stars from the background color. Or like cookie cutters in the shape of stars that are mashed right into the dough:

-webkit-background-clip: text;
-webkit-text-fill-color: transparent;

The browser support for background clipping and text fills is pretty darn good. IE11 is the only holdout.
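Putting the pieces together, the whole method fits in a few lines. This is roughly the shape of the demo above, not a verbatim copy of it:

/* Roughly the shape of the demo above, not a verbatim copy */
:root {
  --star-size: 60px;
  --star-color: #fff;
  --star-background: #fc0;
}
.stars {
  --percent: calc(var(--rating) / 5 * 100%);
  display: inline-block;
  font-size: var(--star-size);
  line-height: 1;
}
.stars::before {
  content: "★★★★★";
  letter-spacing: 3px;
  background: linear-gradient(90deg,
    var(--star-background) var(--percent),
    var(--star-color) var(--percent));
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
}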

Accessibility: ?????
Management: ?????
Performance: ?????
Maintenance: ?????
Overall: ?????

Final thoughts

                Image Files   Background Image   SVG     CSS Shapes   Unicode Symbols
Accessibility   ?????         ?????              ?????   ?????        ?????
Management      ?????         ?????              ?????   ?????        ?????
Performance     ?????         ?????              ?????   ?????        ?????
Maintenance     ?????         ?????              ?????   ?????        ?????
Overall         ?????         ?????              ?????   ?????        ?????

Of the five methods we covered, two are my favorites: using SVG (Method 3) and using Unicode characters in pseudo-elements (Method 5). There are definitely use cases where a background image makes a lot of sense, but that seems best evaluated case-by-case as opposed to a go-to solution.

You always have to consider all the benefits and downsides of a specific method. This, in my opinion, is the beauty of front-end development! There are multiple ways to go, and proper experience is required to implement features efficiently.

The post Five Methods for Five-Star Ratings appeared first on CSS-Tricks.

PSA: Linking to a Code of Conduct Template is Not the Same as Having a Code of Conduct

Css Tricks - Fri, 07/05/2019 - 5:19am

Did you know we have a site that lists all upcoming conferences related to front-end web design and development? We do! If you're looking to go to one, check it out. If you're running one, feel free to submit yours.

Now that we're running this, I've got loads of Pull Requests for conferences all around the world. I didn't realize that many (most?) conferences use the template at confcodeofconduct.com. In fact, many of them just link to it and call it a day.

That's why I'm very happy to see there is a new, bold warning about doing just that.

Important notice

This code of conduct page is a template and should not be considered as enforceable. If an event has linked to this page, please ask them to publish their own code of conduct including details on how to report issues and where to find support.

It's great that this site exists to give people some starter language for thinking about the idea of a code of conduct, but I can attest to the fact that many conferences used it as a way to appear to have a code of conduct before this warning while making zero effort to craft their own.

The primary concern about linking directly to someone else's code of conduct, or copying and pasting it to a new page verbatim, is that there is nothing about what to do in case of problems. So, should a conduct incident occur, there is no documented information for what people should do in that event. Without actionable follow-through, a code of conduct is close to meaningless. It's soulless placating.

This is just one example:

It's not to single someone out. It's just one example of at least a dozen.

I heard from quite a few people about this, and I agree that it's potentially a serious issue. I've tried to be clear about it: I won't merge a Pull Request if the conference is missing a code of conduct or it simply links to confcodeofconduct.com (or uses a direct copy of it with no actionable details).

I know the repo is looking for help translating the new warning into different languages. If you can help with that, I'm sure they'd love a PR to the appropriate index HTML file.

The post PSA: Linking to a Code of Conduct Template is Not the Same as Having a Code of Conduct appeared first on CSS-Tricks.

The Twelfth Fourth

Css Tricks - Thu, 07/04/2019 - 9:09am

CSS-Tricks is 12 years old! Firmly into that Early Adolescence stage, I'd say ;) As we do each year, let's reflect upon the past year. I'd better have something to say, right? Otherwise, John Prine would get mad at me.

How the hell can a person go to work in the morning
And come home in the evening and have nothing to say.
- Angel From Montgomery

See the Pen
Fireworks!
by Tim Severien (@timseverien)
on CodePen.

Easily the biggest change this year was design v17

We redesign most years, so it's not terribly shocking I suppose that we did this year, but I think it's fairly apparent that this was a big one. The biggest since v10.

Here's a writeup on v17.

I still get happy emails about it.

The aesthetics of it still feel fresh to me, 6 months later. There are no plans at all yet for what the next version will be. I imagine this one will last a good couple of years with tweaks along the way. I'm always working on it. Just in the last few days, I have several commits cleaning things up, adding little features, and optimizing. That work is never done. v18 might just be a more thorough scrubbing of what is here. Might be a good release to focus on the back-end tech. I've always wanted to try some sort of MVC setup.

In a way, things feel easier.

There is a lot going right around here. We've got a great staff. Our editorial workflow, led by Geoff, has been smooth. There are ebbs and flows of how many great guest posts are in the pipeline, but it never seems to run dry and these days we stay more ahead than we ever have.

We stay quite organized with Notion. In fact, I still use it heavily across all the teams I'm on. It's just as fundamental as Slack and email.

We're still working with BuySellAds as a partner to help us sell advertising and sponsorship partnerships. We've worked with them for ages and they really do a good job with clean ad tech, smooth integration workflows, and finding good companies that want to run campaigns.

On the 10th anniversary I wrote:

If you do all the work, the hope is that you just get to keep on keeping on. Everyone gets paid for their effort. This is not a hockey-stick growth kind of site. It's a modest publication.

Yep.

Check out a year over year chart from Google Analytics:

I can look at that and celebrate the moments with growth. Long periods of 20% year over year growth, which is cool. Then if you look at just this last month, we're more even or a little bit under 2018 (looking at only pageviews). Good to know, kinda, but I never put much stock in this kind of generic analytics. I'm glad we record them. I would want to know if we started tanking or growing hugely. But we never do. We have long slow steady growth and that is a comfortable place for me.

Thinking on ads

The world of advertising is tightly integrated around here, of course. I'm sure many of you look at this site and go JEEZ, LITTLE HEAVY ON THE ADS, EH? I hope it's not too big a turnoff, as I really try to be tasteful with them. But another thing you should know is that the ad tech is clean. No tracking stuff. No retargeting. No mysterious third-party JavaScript. There might be an impression-tracking pixel here and there, but that's about it. No slew of hundreds of requests doing god-knows-what.

That's not by accident. It's clear to me now how to go down that other road, and that road has money on it. Twice as much. But I look at it as what would be short-term gains. Nobody is going to be more mad at me than you if I slap 80 tracking scripts on this site; my credibility amongst devs would go out the window, along with any hopes of sustaining or growing this site. It's no surprise to me that on sites without developers as an audience, the tendency is to go down the golden road of tracking scripts.

Even the tech is easier.

Just starting in July I've gotten all my sites on Flywheel hosting, and I've written about that here just today. Flywheel is a new sponsor here to the site, and I'm equally excited about that as I am in actually using it. Between using Local for local WordPress development, GitHub for repos, Buddy for deployment, Cloudflare for DNS/CDN... everything just feels smooth and easy right now.

The way I feel about tech at the moment is that nearly anything is doable. Even stuff that feels insurmountable. It's just work. Do a bunch of work, get the thing done.

Fancy posts

One thing that we snuck in this year is the idea of posts that have special design applied to them. The term "Art-directed articles" seems to be the term that has stuck for that, for better or worse, and we've added to that.

There are posts like The Great Divide that I intentionally wanted to make stand out.

And now we've taken that and turned it into a template. The point of an art-directed article is to do something unique, so a template is a little antithetical to that, but I think this strikes a nice middle ground. The template assumes a big full-width header with background image under big title and then is otherwise just a centered column of type on white. The point is to use the template, then apply custom styles on top of it as needed to do something special for the post. I have a good feeling we'll keep using it and have fun with it, and that it won't be too burdensome for future designs.

Elsewhere

Last year at this time I was just settling into living in Bend, Oregon. It still feels that way. I'm in a new house now, that we've bought, and it feels like this is a very permanent living situation. But we're less than a year into the new house so there is plenty of dust to settle. I'm still honeymooning on Bend as I just love it here so much. My daughter is just over a year and a half now so stability is very much what we're after.

Professionally, most of my time is on CodePen, of course. There is a lot of overlap, like the fact that we work with BuySellAds on both sites and often sell across both. Plus working on CSS-Tricks always has me in CodePen anyway ;). Miraculously, Dave Rupert and I almost never miss a week on ShopTalk Show. Going strong after all these years. Never a shortage of stuff to talk about when it comes to websites.

Thank you

A big hearty thanks from me! Y'all reading this site is what makes it possible.

The post The Twelfth Fourth appeared first on CSS-Tricks.

CSS-Tricks on Flywheel

Css Tricks - Thu, 07/04/2019 - 5:50am

I first heard of Flywheel through their product Local, which is a native app for working on WordPress sites. If you ask around for what people use for that kind of work, you'll get all sorts of answers, but an awful lot of very strong recommendations for Local. I've become one of them! We ultimately did a sponsored post for Local, but that's based on the fact that now 100% of my local WordPress development work is done using it and I'm very happy with it.

Now I've taken the next step and moved all my production sites to Flywheel hosting!

Full disclosure here, Flywheel is now a sponsor of CSS-Tricks. I've been wanting to work with them for a while. I've been out to visit them in Omaha! (👋 at Jamie, Christi, Karissa, and everybody I've worked with over there.) Part of our deal includes the hosting. But I was a paying customer and user of Flywheel before this on some sites, and my good experiences there are what made me want to get this sponsorship partnership cooking! There has been big recent news that Flywheel was acquired by WP Engine. I'm also a fan of WP Engine, being also a premium WordPress host that has done innovative things with hosting, so I'm optimistic that a real WordPress hosting powerhouse is being formed and I've got my sites in the right place.

Developing on Local is a breeze

It feels like a breath of fresh air to me, as running all the dev dependencies for WordPress has forever been a bit of a pain in the butt. Sometimes you have it going fine, but then something breaks in the most inscrutable possible way and it takes forever to get going again. Whatever, you know what I mean. At this point, I've been running Local for over a year and have had almost no issues with it.

There are all kinds of features worth checking out here. Here's one that is very likely useful to bigger teams. Say you have a Flywheel account with a bunch of production sites on it. Then a new person starts working with you and they have their own computer. You connect Local to Flywheel, and you can pull down the site and have it ready to work on. That's pretty sweet.

Local doesn't lock you into anything either. You can use Local for local development and literally use nothing else. Local can push a site up to Flywheel hosting too, which I've found to be mighty useful particularly for that first deployment of a new site, but you don't have to use that if you don't want. I'll cover more about workflow below.

Other features that I find worthy of note:

  • Spinning up a new site takes just a second. A quick walk through a wizard where they ask for some login details but otherwise offer smart-but-customizable defaults.
  • Dealing with HTTPS locally is easy. It will create a certificate for you and trust it locally with one click.
  • You can flip on "Live Link", which uses ngrok to create a live, sharable URL to your localhost site. Great for temporarily showing a client or co-worker something without having to move anything.
  • One click to pop open the database in Sequel Pro, my favorite free database tool. Much easier than trying to spin up phpMyAdmin or whatever on the web to manage from there.
Flywheel's Dashboard is so clear

I love the simple UI of Local, and I really like how that same design and spirit carries over into the Flywheel hosting dashboard.

There are so many things the dashboard makes easy:

  • You need an SSL cert? Click some buttons.
  • Wanna force HTTPS? Flip a switch.
  • Wanna convert the site to Multisite? Hit a button.
  • Need to edit the database? There is a UI around it built in.
  • Want a CDN? Toggle a thing.
  • Need to invite a collaborator on a site? Go for it.
  • Need a backup? There are in there, download it or restore to that point.

It's a big deal when everything is simple and works. It means you aren't burning hours fighting with tools and you can use them doing work that pushes you forward.

Workflow

When I set up my new CSS-Tricks workflow, I had Flywheel move the site for me (thanks gang!) (no special treatment either, they'll do that for anybody).

I've got Local already, so my local development process is the same. But I needed to update my deployment workflow for the new hosting. Local can push a site up to Flywheel hosting, but it just zips everything up and sends it all up. Great for first deployment but not perfect for tiny little changes like 95% of the work I do. There is a new Local for Teams feature, which uses what they call MagicSync for deployment, which only deploys changed files. That's very cool, but I like working with a Git-based system, where ultimately merges to master are what trigger deployment of the changed files.

For years I've used Beanstalk for Git-based deployment over SFTP. I still am using Beanstalk for many sites and think it's a great choice, but Beanstalk has the limitation that the Git-repo is basically a private Git repo hosted by Beanstalk itself.

During this change, I needed to switch up what the root of the repo is (more on that in a second) so I wanted to create a new repo. I figured rather than doing that on Beanstalk, I'd make a private GitHub repo and set up deployment from there. There are services like DeployHQ and DeployBot that will work well for that, but I went with Buddy, which has a really really nice UI for managing all this stuff, and is capable of much more than just deployment should I ultimately need that.

Regarding the repo itself, one thing that I've always done with my WordPress sites is just make the repo the whole damn thing starting at the root. I think it's just a legacy/comfort thing. I had some files at the root I wanted to deploy along with everything else and that seemed like the easiest way. In WordPress-land, this isn't usually how it's done. It's more common to have the /wp-content/ folder be the root of the repo, as those are essentially the only files unique to your installation. I can imagine setups where even down to individual themes are repos and deployed alone.

I figured I'd get on board with a more scoped deployment, but also, I didn't have much of a choice. Flywheel literally locks down all WordPress core files, so if your deployment system tries to override them, it will just fail. That actually sounds great to me. There is no reason anyone from the outside should alter those files, might as well totally remove it as an attack vector. Flywheel itself keeps the WordPress version up to date. So I made a new repo with /wp-content/ at the root, and I figured I'd make it on GitHub instead just because that's such an obvious hub of developer activity and keeps my options wide open for deployment choices.

Maybe I'll open source it all one day when I've had a chance to comb through it.

For the same kind of spiritual reasons, during the move, I moved the DNS over to Cloudflare. This gives me control over DNS from a third-party so it's easy for me to point things where I need them. Kind of a decentralization of concerns. That's not for everyone, but it's great for me on this project. While now I might suffer from Cloudflare outages (rare, but it literally just happened), I benefit from all sorts of additional security and performance that Cloudflare can provide.

So the workflow is Local > GitHub > Buddy > Flywheel.

And the hosting is Cloudflare > Flywheel with image assets on Cloudinary.

And I've got backups from both Flywheel and Jetpack/VaultPress.

The post CSS-Tricks on Flywheel appeared first on CSS-Tricks.

Menus with “Dynamic Hit Areas”

Css Tricks - Wed, 07/03/2019 - 10:48am

Flyout menus! The second you need to implement a menu that uses a hover event to display more menu items, you're in tricky territory. For one, they should work with clicks and taps, too. Without that, you've broken the menu for anyone without a mouse. That doesn't mean you can't also use :hover. When you use a hover state to reveal more content, that means an un-hovering state needs to hide them. Therein lies the problem.

The problem is that if a submenu pops out somewhere on hover, getting your mouse over to it might involve moving it along a fairly narrow corridor. Accidentally move outside that area and the menu can close, which makes for an extremely frustrating UX moment.

We've covered this before in our "Dropdown Menus with More Forgiving Mouse Movement Paths" article.

You can get to the menu item you want, but there are some narrow passages along the way. Many dropdowns are designed such that the submenu where the desired menu item is may close on you when the right area isn't in :hover, or a mouseleave or a mouseout occurs.

The most compelling examples that solve this issue are the ones that involve extra hidden "hit areas." Amazon doesn't really have menus like this anymore (that I can see), and perhaps this is one of the reasons why. But in the past, they've used this hit area technique. We could call them "dynamic hit areas" because they were drawn based on the position of the parent element and the submenus:

I haven't seen a lot of implementations of this lately, but just recently, Hakim El Hattab included a modern implementation of this in his talk at CSS Day 2019. The implementation draws the areas dynamically with SVG. You don't actually see the hit areas, but they do look like this, forming paths that prevent hover-offs.
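The gist of it, as a rough sketch (this is not Hakim's actual code; the function and its arguments are made up):

// A rough sketch of the idea, not Hakim's actual code; the function and
// its arguments are made up. On hover, draw an invisible SVG triangle from
// the hovered item to the submenu's near corners so the pointer can travel
// across it without ever leaving a hover target.
function drawHitArea(svg, item, submenu) {
  const a = item.getBoundingClientRect();
  const b = submenu.getBoundingClientRect();
  const polygon = document.createElementNS('http://www.w3.org/2000/svg', 'polygon');
  polygon.setAttribute('points', [
    `${a.right},${a.top + a.height / 2}`, // middle of the item's right edge
    `${b.left},${b.top}`,                 // submenu's top-left corner
    `${b.left},${b.bottom}`,              // submenu's bottom-left corner
  ].join(' '));
  polygon.setAttribute('fill', 'transparent'); // invisible, but still hoverable
  svg.appendChild(polygon);
  return polygon;
}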

I'll include a YouTube embed of the talk starting at that point here:

The way he draws the hit area is so fancy it makes me all kinds of happy:

The live demo of it is up on the Slides.com pattern library thingy.

The post Menus with “Dynamic Hit Areas” appeared first on CSS-Tricks.

Hey, let’s create a functional calendar app with the JAMstack

Css Tricks - Wed, 07/03/2019 - 4:37am

Hey, let's create a functional calendar app with the JAMstack

I’ve always wondered how dynamic scheduling worked so I decided to do extensive research, learn new things, and write about the technical part of the journey. It’s only fair to warn you: everything I cover here is three weeks of research condensed into a single article. Even though it’s beginner-friendly, it’s a healthy amount of reading. So, please, pull up a chair, sit down and let’s have an adventure.

My plan was to build something that looked like Google Calendar but only demonstrate three core features:

  1. List all existing events on a calendar
  2. Create new events
  3. Schedule an email notification based on the date chosen during creation. The schedule should run some code to email the user when the time is right.

Pretty, right? Make it to the end of the article, because this is what we’ll make.

The only knowledge I had about asking my code to run at a later or deferred time was CRON jobs. The easiest way to use a CRON job is to statically define a job in your code. This is ad hoc — statically means that I cannot simply schedule an event like Google Calendar and easily have it update my CRON code. If you are experienced with writing CRON triggers, you feel my pain. If you’re not, you are lucky you might never have to use CRON this way.

To elaborate more on my frustration, I needed to trigger a schedule based on a payload of HTTP requests. The dates and information about this schedule would be passed in through the HTTP request. This means there’s no way to know things like the scheduled date beforehand.

We (my colleagues and I) figured out a way to make this work and — with the help of Sarah Drasner’s article on Durable Functions — I understood what I needed to learn (and unlearn, for that matter). You will learn about everything I worked on in this article, from event creation to email scheduling to calendar listings. Here is a video of the app in action:

You might notice the subtle delay. This has nothing to do with the execution timing of the schedule or running the code. I am testing with a free SendGrid account, which I suspect has some form of latency. You can confirm this by testing the responsible serverless function without sending emails. You would notice that the code runs at exactly the scheduled time.

Tools and architecture

Here are the three fundamental units of this project:

  1. React Frontend: Calendar UI, including the UI to create, update or delete events.
  2. 8Base GraphQL: A back-end database layer for the app. This is where we will store, read and update our data. The fun part is you won’t write any code for this back end.
  3. Durable Functions: Durable Functions are a kind of serverless function with the power of remembering their state from previous executions. This is what replaces CRON jobs and solves the ad hoc problem we described earlier.

See the Pen
durable-func1
by Chris Nwamba (@codebeast)
on CodePen.

The rest of this post will have three major sections based on the three units we saw above. We will take them one after the other, build them out, test them, and even deploy the work. Before we get on with that, let’s setup using a starter project I made to get us started.

Project Repo

Getting Started

You can set up this project in different ways — either as a full-stack project with the three units in one project or as a standalone project with each unit living in its own root. Well, I went with the first because it’s more concise, easier to teach, and manageable since it’s one project.

The app will be a create-react-app project and I made a starter for us to lower the barrier to set up. It comes with supplementary code and logic that we don’t need to explain since they are out of the scope of the article. The following are set up for us:

  1. Calendar component
  2. Modal and popover components for presenting event forms
  3. Event form component
  4. Some GraphQL logic to query and mutate data
  5. A Durable Serverless Function scaffold where we will write the schedulers

Tip: Each existing file that we care about has a comment block at the top of the document. The comment block tells you what is currently happening in the code file and a to-do section that describes what we are required to do next.

Start by cloning the starter from GitHub:

git clone -b starter --single-branch https://github.com/christiannwamba/calendar-app.git

Install the npm dependencies described in the root package.json file as well as the serverless package.json:

npm install

Orchestrated Durable Functions for scheduling

There are two words we need to get out of the way first before we can understand what this term is — orchestration and durable.

Orchestration was originally used to describe an assembly of well-coordinated events, actions, etc. It is heavily borrowed in computing to describe a smooth coordination of computer systems. The key word is coordinate. We need to put two or more units of a system together in a coordinated way.

Durable is used to describe anything that has the outstanding feature of lasting longer.

Put system coordination and long lasting together, and you get Durable Functions. This is the most powerful feature of Azure’s Serverless Functions. Based on what we now know, Durable Functions have these two features:

  1. They can be used to assemble the execution of two or more functions and coordinate them so race conditions do not occur (orchestration).
  2. Durable Functions remember things. This is what makes them so powerful. It breaks the number one rule of HTTP: statelessness. Durable Functions keep their state intact no matter how long they have to wait. Create a schedule for 1,000,000 years into the future and a durable function will execute after one million years, still remembering the parameters that were passed to it on the day it was triggered. That means Durable Functions are stateful.

These durability features unlock a new realm of opportunities for serverless functions and that is why we are exploring one of those features today. I highly recommend Sarah’s article one more time for a visualized version of some of the possible use cases of Durable Functions.

I also made a visual representation of the behavior of the Durable Functions we will be writing today. Take this as an animated architectural diagram:

A data mutation from an external system (8Base) triggers the orchestration by calling the HTTP Trigger. The trigger then calls the orchestration function which schedules an event. When the time for execution is due, the orchestration function is called again but this time skips the orchestration and calls the activity function. The activity function is the action performer. This is the actual thing that happens e.g. "send email notification".

Create orchestrated Durable Functions

Let me walk you through creating functions using VS Code. You need two things:

  1. An Azure account
  2. VS Code

Once you have both set up, you need to tie them together. You can do this using a VS Code extension and a Node CLI tool. Start by installing the CLI tool:

npm install -g azure-functions-core-tools

# OR
brew tap azure/functions
brew install azure-functions-core-tools

Next, install the Azure Function extension to have VS Code tied to Functions on Azure. You can read more about setting up Azure Functions from my previous article.

Now that you have all the setup done, let’s get into creating these functions. The functions we will be creating will map to the following folders.

Folder                 Function
schedule               Durable HTTP Trigger
scheduleOrchestrator   Durable Orchestration
sendEmail              Durable Activity

Start with the trigger.

  1. Click on the Azure extension icon to create the schedule function.
  2. Since this is the first function, choose the folder icon to create a function project. The icon next to it creates a single function (not a project).
  3. Click Browse and create a serverless folder inside the project. Select the new serverless folder.
  4. Select JavaScript as the language. If TypeScript (or any other language) is your jam, please feel free.
  5. Select Durable Functions HTTP starter. This is the trigger.
  6. Name the first function schedule.

Next, create the orchestrator. This time, instead of creating a function project, create a single function.

  1. Click on the function icon.
  2. Select Durable Functions orchestrator.
  3. Give it the name scheduleOrchestrator and hit Enter.
  4. You will be asked to select a storage account. Orchestrator uses storage to preserve the state of a function-in-process.
  5. Select a subscription in your Azure account. In my case, I chose the free trial subscription.
  6. Follow the few remaining steps to create a storage account.

Finally, repeat the previous step to create an Activity. This time, the following should be different:

  • Select Durable Functions activity.
  • Name it sendEmail.
  • No storage account will be needed.

Scheduling with a durable HTTP trigger

The code in serverless/schedule/index.js does not need to be touched. This is what it looks like originally when the function is scaffolded using VS Code or the CLI tool.

const df = require("durable-functions");

module.exports = async function (context, req) {
  const client = df.getClient(context);
  const instanceId = await client.startNew(req.params.functionName, undefined, req.body);
  context.log(`Started orchestration with ID = '${instanceId}'.`);
  return client.createCheckStatusResponse(context.bindingData.req, instanceId);
};

What is happening here?

  1. We’re creating a Durable Functions client from the context of the request.
  2. We’re calling the orchestrator using the client's startNew() function. The orchestrator function's name is passed as the first argument to startNew() via the params object, and req.body is passed as the third argument, which gets forwarded to the orchestrator.
  3. Finally, we return a set of data that can be used to check the status of the orchestrator function, or even cancel the process before it's complete.
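For reference, that check-status payload is a small JSON document of management URLs. Trimmed down (the real URLs carry long query tokens), it looks roughly like this:

{
  "id": "abc123",
  "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123?...",
  "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/raiseEvent/{eventName}?...",
  "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/terminate?reason={text}&...",
  "purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123?..."
}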

The URL to call the above function would look like this:

http://localhost:7071/api/orchestrators/{functionName}

Where functionName is the name passed to startNew. In our case, it should be:

http://localhost:7071/api/orchestrators/scheduleOrchestrator
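If you want to smoke-test the trigger locally before wiring anything else up, running func start inside the serverless folder and POSTing a sample payload works. The field names here mirror the event payload we wire up later; the values are placeholders:

curl -X POST http://localhost:7071/api/orchestrators/scheduleOrchestrator \
  -H "Content-Type: application/json" \
  -d '{
    "email": "you@example.com",
    "title": "Test event",
    "description": "Testing the trigger",
    "startAt": "2019-07-20T10:00:00.000Z"
  }'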

It’s also good to know that you can change how this URL looks.
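The orchestrators/{functionName} segment comes from the route property in the trigger’s function.json, and the api prefix comes from the routePrefix setting in host.json. As a sketch (other bindings trimmed), renaming the route could look like this, as long as the {functionName} token stays so startNew still receives the name:

{
  "bindings": [
    {
      "authLevel": "anonymous",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "route": "schedules/{functionName}",
      "methods": ["post"]
    }
  ]
}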

Orchestrating with a Durable Orchestrator

The HTTP trigger’s startNew call invokes a function based on the name we pass to it. That name corresponds to the name of the function and the folder that holds the orchestration logic. The serverless/scheduleOrchestrator/index.js file exports a Durable Function. Replace its content with the following:

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput();
  // TODO -- 1
  // TODO -- 2
});

The orchestrator function retrieves the request body from the HTTP trigger using context.df.getInput().

Replace TODO -- 1 with the following line of code, which might just be the most significant line in this entire demo:

yield context.df.createTimer(new Date(input.startAt))

What this line does is use the Durable Functions API to create a timer based on the date passed in from the request body via the HTTP trigger.

When this function executes and gets here, it will start the timer and bail out temporarily. When the schedule is due, execution comes back, skips this line, and runs the following line, which you should use in place of TODO -- 2.

return yield context.df.callActivity('sendEmail', input);

This calls the activity function to send an email, passing the payload along as the second argument.

This is what the completed function would look like:

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput();
  yield context.df.createTimer(new Date(input.startAt));
  return yield context.df.callActivity('sendEmail', input);
});

Sending email with a durable activity

When a schedule is due, the orchestrator comes back to call the activity. The activity file lives in serverless/sendEmail/index.js. Replace what’s in there with the following:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env['SENDGRID_API_KEY']);

module.exports = async function(context) {
  // TODO -- 1
  const msg = {}
  // TODO -- 2
  return msg;
};

It currently imports SendGrid’s mailer and sets the API key. You can get an API Key by following these instructions.

I am setting the key in an environment variable to keep my credentials safe. You can store yours the same way by creating a SENDGRID_API_KEY key in serverless/local.settings.json with your SendGrid key as the value:

{ "IsEncrypted": false, "Values": { "AzureWebJobsStorage": "<<AzureWebJobsStorage>", "FUNCTIONS_WORKER_RUNTIME": "node", "SENDGRID_API_KEY": "<<SENDGRID_API_KEY>" } }

Replace TODO -- 1 with the following line:

const { email, title, startAt, description } = context.bindings.payload;

This pulls the event information out of the input that came from the orchestrator function. The input is attached to context.bindings. The binding can be named anything you like; we're calling it payload, so go to serverless/sendEmail/function.json and change the name value to payload:

{ "bindings": [ { "name": "payload", "type": "activityTrigger", "direction": "in" } ] }

Next, update TODO -- 2 with the following block to send an email:

const msg = {
  to: email,
  from: {
    email: 'chris@codebeast.dev',
    name: 'Codebeast Calendar'
  },
  subject: `Event: ${title}`,
  html: `<h4>${title} @ ${startAt}</h4> <p>${description}</p>`
};
sgMail.send(msg);

return msg;
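One small thing worth considering: sgMail.send() returns a promise, and in an async function nothing stops the activity from finishing before SendGrid has accepted the message. If you want that guarantee, awaiting the call is a one-line tweak:

await sgMail.send(msg);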

Here is the complete version:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env['SENDGRID_API_KEY']);

module.exports = async function(context) {
  const { email, title, startAt, description } = context.bindings.payload;
  const msg = {
    to: email,
    from: {
      email: 'chris@codebeast.dev',
      name: 'Codebeast Calendar'
    },
    subject: `Event: ${title}`,
    html: `<h4>${title} @ ${startAt}</h4> <p>${description}</p>`
  };
  sgMail.send(msg);
  return msg;
};

Deploying functions to Azure

Deploying functions to Azure is easy. It’s merely a click away from the VS Code editor: click the deploy icon in the Azure extension to deploy and get a deploy URL.
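If you prefer the terminal, the Azure Functions Core Tools we installed earlier can publish the project too. A sketch, assuming your function app already exists in Azure; swap in your own app name:

cd serverless
func azure functionapp publish <your-function-app-name>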

Still with me this far in? You’re making great progress! It’s totally OK to take a break here, nap, stretch or get some rest. I definitely did while writing this post.

Data and GraphQL layer with 8Base

My easiest description and understanding of 8Base is "Firebase for GraphQL." 8Base is a database layer for any kind of app you can think of and the most interesting aspect of it is that it’s based on GraphQL.

The best way to describe where 8Base fits in your stack is to paint a picture of a scenario.

Imagine you are a freelance developer with a small-to-medium scale contract to build an e-commerce store for a client. Your core skills are on the web, so you are not very comfortable on the back end, though you can write a bit of Node.

Unfortunately, e-commerce requires managing inventory, orders, purchases, authentication and identity, etc. "Manage" at a fundamental level just means data CRUD and data access.

Instead of the redundant and boring process of creating, reading, updating, deleting, and managing access for entities in our back-end code, what if we could describe these business requirements in a UI? What if we could create tables that allow us to configure CRUD operations, auth, and access? What if we had that kind of help and could focus only on building front-end code and writing queries? Everything we just described is tackled by 8Base.

Here is the architecture of a back-end-less app that relies on 8Base as its data layer:

Create an 8Base table for events storage and retrieval

The first thing we need to do before creating a table is to create an account. Once you have an account, create a workspace that holds all the tables and logic for a given project.

Next, create a table, name the table Events and fill out the table fields.

We need to configure access levels. Right now, there’s nothing to hide from each user, so we can just turn on all access to the Events table we created:

Setting up Auth is super simple with 8base because it integrates with Auth0. If you have entities that need to be protected or want to extend our example to use auth, please go wild.

Finally, grab your endpoint URL for use in the React app:

Testing GraphQL queries and mutations in the playground

Just to be sure that we are ready to take the URL to the wild and start building the client, let’s first test the API with a GraphQL playground and see if the setup is fine. Click on the explorer.

Paste the following query in the editor.

query {
  eventsList {
    count
    items {
      id
      title
      startAt
      endAt
      description
      allDay
      email
    }
  }
}

I created some test data through the 8base UI and I get the result back when I run the query:

You can explore the entire database using the schema document on the right end of the explorer page.
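Mutations can be tested from the same playground. One caveat: 8base generates mutation names from the table name, and in my experience the create mutation for an Events table shows up as eventCreate, but confirm the exact name in the schema document before leaning on this sketch:

mutation {
  eventCreate(
    data: {
      title: "Test event"
      description: "Created from the playground"
      startAt: "2019-07-20T10:00:00.000Z"
      endAt: "2019-07-20T11:00:00.000Z"
      allDay: false
      email: "you@example.com"
    }
  ) {
    id
    title
  }
}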

Calendar and event form interface

The third (and last) unit of our project is the React App which builds the user interfaces. There are four major components making up the UI and they include:

  1. Calendar: A calendar UI that lists all the existing events
  2. Event Modal: A React modal that renders the EventForm component to create an event
  3. Event Popover: Popover UI to read a single event, update an event using EventForm, or delete an event
  4. Event Form: HTML form for creating a new event

Before we dive right into the calendar component, we need to set up the React Apollo client. The React Apollo provider empowers you with tools to query a GraphQL data source using React patterns. The original provider allows you to use higher-order components or render props to query and mutate data. We will be using a wrapper around the original provider that allows you to query and mutate using React Hooks.

In src/index.js, import the React Apollo Hooks and the 8base client in TODO -- 1:

import { ApolloProvider } from 'react-apollo-hooks';
import { EightBaseApolloClient } from '@8base/apollo-client';

At TODO -- 2, configure the client with the endpoint URL we got in the 8base setup stage:

const URI = 'https://api.8base.com/cjvuk51i0000701s0hvvcbnxg';
const apolloClient = new EightBaseApolloClient({
  uri: URI,
  withAuth: false
});

Use this client to wrap the entire App tree with the provider on TODO -- 3:

ReactDOM.render(
  <ApolloProvider client={apolloClient}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);

Showing events on the calendar

The Calendar component is rendered inside the App component and imports the BigCalendar component from npm. Then:

  1. We render Calendar with a list of events.
  2. We give Calendar a custom popover (EventPopover) component that will be used to edit events.
  3. We render a modal (EventModal) that will be used to create new events.

The only thing we need to update is the list of events. Instead of using a static array of events, we want to query 8base for all stored events.

Replace TODO -- 1 with the following line:

const { data, error, loading } = useQuery(EVENTS_QUERY);

Import the useQuery hook from npm and EVENTS_QUERY at the beginning of the file:

import { useQuery } from 'react-apollo-hooks';
import { EVENTS_QUERY } from '../../queries';

EVENTS_QUERY is exactly the same query we tested in 8base explorer. It lives in src/queries and looks like this:

export const EVENTS_QUERY = gql`
  query {
    eventsList {
      count
      items {
        id
        ...
      }
    }
  }
`;

Let’s add a simple error and loading handler on TODO -- 2:

if (error) return console.log(error);

if (loading)
  return (
    <div className="calendar">
      <p>Loading...</p>
    </div>
  );
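Once loading settles, data.eventsList.items holds the stored events, and they get handed to BigCalendar. The starter already wires this up, but roughly speaking it boils down to something like this sketch (calendarProps stands in for the starter's other props, and BigCalendar expects Date objects for start and end):

// A sketch: shape the query result for BigCalendar, which expects
// `start` and `end` Date objects on each event.
const calendarEvents = data.eventsList.items.map(item => ({
  ...item,
  start: new Date(item.startAt),
  end: new Date(item.endAt)
}));

return <BigCalendar events={calendarEvents} {...calendarProps} />;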

Notice that the Calendar component uses the EventPopover component to render a custom event. You can also observe that the Calendar component file renders EventModal as well. Both components have been set up for you, and their only responsibility is to render EventForm.

Create, update and delete events with the event form component

The component in src/components/Event/EventForm.js renders a form. The form is used to create, edit or delete an event. At TODO -- 1, import useCreateUpdateMutation and useDeleteMutation:

import { useCreateUpdateMutation, useDeleteMutation } from './eventMutationHooks';

  • useCreateUpdateMutation: This mutation either creates or updates an event depending on whether the event already existed.
  • useDeleteMutation: This mutation deletes an existing event.

A call to either of these functions returns another function, and the returned function can then serve as an event handler.

Now, go ahead and replace TODO -- 2 with a call to both functions:

const createUpdateEvent = useCreateUpdateMutation(
  payload,
  event,
  eventExists,
  () => closeModal()
);
const deleteEvent = useDeleteMutation(event, () => closeModal());
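Wiring them into the form is then just a matter of attaching them as handlers. The markup below is illustrative rather than the starter's exact form:

<button type="button" onClick={createUpdateEvent}>
  {eventExists ? 'Update event' : 'Create event'}
</button>
{eventExists && (
  <button type="button" onClick={deleteEvent}>
    Delete event
  </button>
)}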

These are custom hooks I wrote to wrap the useMutation hook exposed by React Apollo Hooks. Each hook creates a mutation and passes the mutation variables to useMutation. The blocks that look like the following in src/components/Event/eventMutationHooks.js are the most important parts:

useMutation(mutationType, {
  variables: { data },
  update: (cache, { data }) => {
    const { eventsList } = cache.readQuery({ query: EVENTS_QUERY });
    cache.writeQuery({
      query: EVENTS_QUERY,
      data: {
        eventsList: transformCacheUpdateData(eventsList, data)
      }
    });
    //..
  }
});

Call the Durable Function HTTP trigger from 8Base

We have spent quite some time building the serverless structure, data storage, and UI layers of our calendar app. To recap: the UI sends data to 8base for storage, 8base saves the data and triggers the Durable Function HTTP trigger, the HTTP trigger kicks off the orchestration, and the rest is history. Currently, we are saving data with mutations, but we are not calling the serverless function anywhere in 8base.

8base allows you to write custom logic, which is what makes it very powerful and extensible. Custom logic consists of simple functions that are called based on actions performed on the 8base database. For example, we can set up a function to be called every time a mutation occurs on a table. Let’s create one that is called when an event is created.

Start by installing the 8base CLI:

npm install -g 8base

In the calendar app project, run the following command to create a starter logic:

8base init 8base

The 8base init command creates a new 8base logic project. You can pass it a directory name; in this case we are naming the 8base logic folder 8base (don’t get it twisted).

Trigger scheduling logic

Delete everything in 8base/src and create a triggerSchedule.js file in the src folder. Once you have done that, drop in the following into the file:

const fetch = require('node-fetch');

module.exports = async event => {
  const res = await fetch('<HTTP Trigger URL>', {
    method: 'POST',
    body: JSON.stringify(event.data),
    headers: { 'Content-Type': 'application/json' }
  });
  const json = await res.json();
  console.log(event, json);
  return json;
};

The information about the GraphQL mutation is available on the event object as data.
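To make that concrete, here is roughly the shape the logic receives after a create mutation on the Events table. The values are placeholders; the fields mirror the table columns we defined earlier:

{
  "data": {
    "id": "ck2abc...",
    "title": "Test event",
    "description": "Created from the app",
    "startAt": "2019-07-20T10:00:00.000Z",
    "endAt": "2019-07-20T11:00:00.000Z",
    "allDay": false,
    "email": "you@example.com"
  }
}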

Replace <HTTP Trigger URL> with the URL you got after deploying your function. You can get the URL by going to the function in the Azure portal and clicking "Copy URL."

You also need to install the node-fetch module, which we are using to call the HTTP trigger:

npm install --save node-fetch

8base logic configuration

The next thing to do is tell 8base exactly which mutation or query should trigger this logic. In our case, that's a create mutation on the Events table. You can describe this in the 8base.yml file:

functions:
  triggerSchedule:
    handler:
      code: src/triggerSchedule.js
    type: trigger.after
    operation: Events.create

In a sense, this says: when a create mutation happens on the Events table, call src/triggerSchedule.js after the mutation has occurred.

We want to deploy all the things

Before anything can be deployed, we need to log in to the 8Base account, which we can do via the command line:

8base login

Then, let’s run the deploy command to send and set up the app logic in your workspace instance.

8base deploy

Testing the entire flow

To see the app in all its glory, click on one of the days in the calendar. You should get the event modal containing the form. Fill it out and set a future start date so we trigger a notification. Try a date more than 2-5 minutes from the current time, because I haven’t been able to trigger a notification any faster than that.

Yay, go check your email! The email should have arrived thanks to SendGrid. Now we have an app that allows us to create events and get notified with the details of the event submission.

The post Hey, let’s create a functional calendar app with the JAMstack appeared first on CSS-Tricks.
