Web Standards

All About React Router 4

Css Tricks - Mon, 08/07/2017 - 1:53am

I met Michael Jackson for the first time at React Rally 2016, soon after writing an article on React Router 3. Michael is one of the principal authors of React Router along with Ryan Florence. It was exciting to meet someone who built a tool I liked so much, but I was shocked when he said, "Let me show you our ideas for React Router 4, it's way different!" Truthfully, I didn't understand the new direction and why it needed such big changes. Since the router is such a big part of an application's architecture, this would potentially change some patterns I've grown to love. The idea of these changes gave me anxiety. Considering community cohesiveness, and given that React Router plays a huge role in so many React applications, I didn't know how the community would accept the changes.

A few months later, React Router 4 was released, and I could tell just from the Twitter buzz there were mixed feelings on the drastic re-write. It reminded me of the push-back the first version of React Router got for its progressive concepts. In some ways, earlier versions of React Router resembled our traditional mental model of what an application router "should be" by placing all the route rules in one place. However, the use of nested JSX routes wasn't accepted by everyone. But just as JSX itself overcame its critics (at least most of them), many came around to believe that a nested JSX router was a pretty cool idea.

So, I learned React Router 4. Admittedly, it was a struggle the first day. The struggle was not with the API, but more so with the patterns and strategy for using it. My mental model from React Router 3 wasn't migrating well to v4. I would have to change how I thought about the relationship between the router and the layout components if I was going to be successful. Eventually, new patterns emerged that made sense to me and I became very happy with the router's new direction. React Router 4 allowed me to do everything I could do with v3, and more. At first, I was over-complicating my use of v4, but once I gained a new mental model for it, I realized that this new direction is amazing!

My intentions for this article aren't to rehash the already well-written documentation for React Router 4. I will cover the most common API concepts, but the real focus is on patterns and strategies that I've found to be successful.

Here are some JavaScript concepts you need to be familiar with for this article:

If you're the type that prefers jumping right to a working demo, here you go:

View Demo

A New API and A New Mental Model

Earlier versions of React Router centralized the routing rules into one place, keeping them separate from layout components. Sure, the router could be partitioned and organized into several files, but conceptually the router was a unit, and basically a glorified configuration file.

Perhaps the best way to see how v4 is different is to write a simple two-page app in each version and compare. The example app has just two routes for a home page and a user's page.

Here it is in v3:

import React from 'react'
import { render } from 'react-dom'
import { Router, Route, IndexRoute, browserHistory } from 'react-router'

const PrimaryLayout = props => (
  <div className="primary-layout">
    <header>
      Our React Router 3 App
    </header>
    <main>
      {props.children}
    </main>
  </div>
)

const HomePage = () => <div>Home Page</div>
const UsersPage = () => <div>Users Page</div>

const App = () => (
  <Router history={browserHistory}>
    <Route path="/" component={PrimaryLayout}>
      <IndexRoute component={HomePage} />
      <Route path="/users" component={UsersPage} />
    </Route>
  </Router>
)

render(<App />, document.getElementById('root'))

Here are some key concepts in v3 that are not true in v4 anymore:

  • The router is centralized to one place.
  • Layout and page nesting is derived by the nesting of <Route> components.
  • Layout and page components are completely unaware that they are part of a router.

React Router 4 does not advocate for a centralized router anymore. Instead, routing rules live within the layout and amongst the UI itself. As an example, here's the same application in v4:

import React from 'react'
import { render } from 'react-dom'
import { BrowserRouter, Route } from 'react-router-dom'

const PrimaryLayout = () => (
  <div className="primary-layout">
    <header>
      Our React Router 4 App
    </header>
    <main>
      <Route path="/" exact component={HomePage} />
      <Route path="/users" component={UsersPage} />
    </main>
  </div>
)

const HomePage = () => <div>Home Page</div>
const UsersPage = () => <div>Users Page</div>

const App = () => (
  <BrowserRouter>
    <PrimaryLayout />
  </BrowserRouter>
)

render(<App />, document.getElementById('root'))

New API Concept: Since our app is meant for the browser, we need to wrap it in <BrowserRouter>, which comes with v4. Also notice we import from react-router-dom now (which means we npm install react-router-dom, not react-router). Hint! It's called react-router-dom now because there's also a native version.

The first thing that stands out when looking at an app built with React Router v4 is that the "router" seems to be missing. In v3 the router was this giant thing we rendered directly to the DOM which orchestrated our application. Now, besides <BrowserRouter>, the first thing we throw into the DOM is our application itself.

Another v3-staple missing from the v4 example is the use of {props.children} to nest components. This is because in v4, wherever the <Route> component is written is where the sub-component will render to if the route matches.

Inclusive Routing

In the previous example, you may have noticed the exact prop. So what's that all about? V3 routing rules were "exclusive" which meant that only one route would win. V4 routes are "inclusive" by default which means more than one <Route> can match and render at the same time.

In the previous example, we're trying to render either the HomePage or the UsersPage depending on the path. If the exact prop were removed from the example, both the HomePage and UsersPage components would have rendered at the same time when visiting `/users` in the browser.

To understand the matching logic better, review path-to-regexp which is what v4 now uses to determine whether routes match the URL.

To demonstrate how inclusive routing is helpful, let's include a UsersMenu in the header, but only when we're in the users part of our application:

const PrimaryLayout = () => (
  <div className="primary-layout">
    <header>
      Our React Router 4 App
      <Route path="/users" component={UsersMenu} />
    </header>
    <main>
      <Route path="/" exact component={HomePage} />
      <Route path="/users" component={UsersPage} />
    </main>
  </div>
)

Now, when the user visits `/users`, both components will render. Something like this was doable in v3 with certain patterns, but it was more difficult. Thanks to v4's inclusive routes, it's now a breeze.

Exclusive Routing

If you need just one route to match in a group, use <Switch> to enable exclusive routing:

const PrimaryLayout = () => (
  <div className="primary-layout">
    <PrimaryHeader />
    <main>
      <Switch>
        <Route path="/" exact component={HomePage} />
        <Route path="/users/add" component={UserAddPage} />
        <Route path="/users" component={UsersPage} />
        <Redirect to="/" />
      </Switch>
    </main>
  </div>
)

Only one of the routes in a given <Switch> will render. We still need exact on the HomePage route, though, if we're going to list it first; otherwise the home page route would also match when visiting paths like `/users` or `/users/add`. In fact, strategic placement is the name of the game when using an exclusive routing strategy (as it always has been with traditional routers). Notice that we strategically place the route for /users/add before the one for /users to ensure correct matching. Since the path /users would match both `/users` and `/users/add`, putting /users/add first is best.

Sure, we could put them in any order if we use exact in certain ways, but at least we have options.

The <Redirect> component will always do a browser-redirect if encountered, but when it's in a <Switch> statement, the redirect component only gets rendered if no other routes match first. To see how <Redirect> might be used in a non-switch circumstance, see Authorized Route below.

"Index Routes" and "Not Found"

While there is no more <IndexRoute> in v4, using <Route exact> achieves the same thing. And if no routes resolve, use <Switch> with <Redirect> to send the user to a default page with a valid path (as I did with HomePage in the example), or even to a not-found page.
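Here's a minimal sketch of the not-found idea; NotFoundPage is a hypothetical component, and a <Route> with no path prop matches any location:

const PrimaryLayout = () => (
  <main>
    <Switch>
      <Route path="/" exact component={HomePage} />
      <Route path="/users" component={UsersPage} />
      {/* no path prop: matches anything not matched above */}
      <Route component={NotFoundPage} />
    </Switch>
  </main>
)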

Nested Layouts

You're probably starting to anticipate nested sub-layouts and how you might achieve them. I didn't think I would struggle with this concept, but I did. React Router v4 gives us a lot of options, which makes it powerful. Options, though, mean the freedom to choose strategies that are not ideal. On the surface, nested layouts are trivial, but depending on your choices you may experience friction because of the way you organized the router.

To demonstrate, let's imagine that we want to expand our users section so we have a "browse users" page and a "user profile" page. We also want similar pages for products. Users and products each need a sub-layout that is special and unique to its respective section. For example, each might have different navigation tabs. There are a few approaches to solve this, some good and some bad. The first approach is not very good, but I want to show you so you don't fall into this trap. The second approach is much better.

For the first, let's modify our PrimaryLayout to accommodate the browsing and profile pages for users and products:

const PrimaryLayout = props => {
  return (
    <div className="primary-layout">
      <PrimaryHeader />
      <main>
        <Switch>
          <Route path="/" exact component={HomePage} />
          <Route path="/users" exact component={BrowseUsersPage} />
          <Route path="/users/:userId" component={UserProfilePage} />
          <Route path="/products" exact component={BrowseProductsPage} />
          <Route path="/products/:productId" component={ProductProfilePage} />
          <Redirect to="/" />
        </Switch>
      </main>
    </div>
  )
}

While this does technically work, taking a closer look at the two user pages starts to reveal the problem:

const BrowseUsersPage = () => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <BrowseUserTable />
    </div>
  </div>
)

const UserProfilePage = props => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <UserProfile userId={props.match.params.userId} />
    </div>
  </div>
)

New API Concept: props.match is given to any component rendered by <Route>. As you can see, the userId is provided by props.match.params. See more in v4 documentation. Alternatively, if any component needs access to props.match but the component wasn't rendered by a <Route> directly, we can use the withRouter() Higher Order Component.
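As a quick sketch of that last point (UserBadge is a hypothetical component, but withRouter is the real v4 higher order component):

import { withRouter } from 'react-router-dom'

// not rendered by a <Route> directly, but still needs the match object
const UserBadge = ({ match }) => (
  <span>Viewing user {match.params.userId}</span>
)

// withRouter injects match, location, and history as props
export default withRouter(UserBadge)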

Each user page not only renders its respective content but also has to be concerned with the sub layout itself (and the sub layout is repeated for each). While this example is small and might seem trivial, repeated code can be a problem in a real application. Not to mention, each time a BrowseUsersPage or UserProfilePage is rendered, it will create a new instance of UserNav which means all of its lifecycle methods start over. Had the navigation tabs required initial network traffic, this would cause unnecessary requests — all because of how we decided to use the router.

Here's a different approach which is better:

const PrimaryLayout = props => {
  return (
    <div className="primary-layout">
      <PrimaryHeader />
      <main>
        <Switch>
          <Route path="/" exact component={HomePage} />
          <Route path="/users" component={UserSubLayout} />
          <Route path="/products" component={ProductSubLayout} />
          <Redirect to="/" />
        </Switch>
      </main>
    </div>
  )
}

Instead of four routes, one for each of the user and product pages, we now have two routes, one for each section's layout.

Notice the above routes do not use the exact prop anymore because we want /users to match any route that starts with /users and similarly for products.

With this strategy, it becomes the task of the sub layouts to render additional routes. Here's what the UserSubLayout could look like:

const UserSubLayout = () => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <Switch>
        <Route path="/users" exact component={BrowseUsersPage} />
        <Route path="/users/:userId" component={UserProfilePage} />
      </Switch>
    </div>
  </div>
)

The most obvious win in the new strategy is that the layout isn't repeated among all the user pages. It's a double win too because it won't have the same lifecycle problems as with the first example.

One thing to notice is that even though we're deeply nested in our layout structure, the routes still need to identify their full path in order to match. To save yourself the repetitive typing (and in case you decide to change the word "users" to something else), use props.match.path instead:

const UserSubLayout = props => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <Switch>
        <Route path={props.match.path} exact component={BrowseUsersPage} />
        <Route path={`${props.match.path}/:userId`} component={UserProfilePage} />
      </Switch>
    </div>
  </div>
)

Match

As we've seen so far, props.match is useful for knowing what userId the profile is rendering and also for writing our routes. The match object gives us several properties including match.params, match.path, match.url and several more.

match.path vs match.url

The differences between these two can seem unclear at first. Console logging them can sometimes reveal the same output making their differences even more unclear. For example, both these console logs will output the same value when the browser path is `/users`:

const UserSubLayout = ({ match }) => {
  console.log(match.url)  // output: "/users"
  console.log(match.path) // output: "/users"
  return (
    <div className="user-sub-layout">
      <aside>
        <UserNav />
      </aside>
      <div className="primary-content">
        <Switch>
          <Route path={match.path} exact component={BrowseUsersPage} />
          <Route path={`${match.path}/:userId`} component={UserProfilePage} />
        </Switch>
      </div>
    </div>
  )
}

ES2015 Concept: match is being destructured at the parameter level of the component function. This means we can type match.path instead of props.match.path.

While we can't see the difference yet, match.url is the actual path in the browser URL and match.path is the path written for the router. This is why they are the same, at least so far. However, if we did the same console logs one level deeper in UserProfilePage and visit `/users/5` in the browser, match.url would be "/users/5" and match.path would be "/users/:userId".
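Here's a quick sketch of that deeper level, reusing the UserProfilePage idea from earlier:

const UserProfilePage = ({ match }) => {
  console.log(match.url)  // "/users/5" (the actual portion of the browser URL)
  console.log(match.path) // "/users/:userId" (the pattern written for the router)
  return <UserProfile userId={match.params.userId} />
}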

Which to choose?

If you're going to use one of these to help build your route paths, I urge you to choose match.path. Using match.url to build route paths will eventually lead to a scenario you don't want. Here's one that happened to me. Inside a component like UserProfilePage (which is rendered when the user visits `/users/5`), I rendered sub-components like these:

const UserComments = ({ match }) => (
  <div>UserId: {match.params.userId}</div>
)

const UserSettings = ({ match }) => (
  <div>UserId: {match.params.userId}</div>
)

const UserProfilePage = ({ match }) => (
  <div>
    User Profile:
    <Route path={`${match.url}/comments`} component={UserComments} />
    <Route path={`${match.path}/settings`} component={UserSettings} />
  </div>
)

To illustrate the problem, I'm rendering two sub components with one route path being made from match.url and one from match.path. Here's what happens when visiting these pages in the browser:

  • Visiting `/users/5/comments` renders "UserId: undefined".
  • Visiting `/users/5/settings` renders "UserId: 5".

So why does match.path work for helping to build our paths and match.url doesn't? The answer lies in the fact that `${match.url}/comments` is basically the same thing as if I had hard-coded '/users/5/comments'. Doing this means the subsequent component won't be able to fill match.params correctly because there were no params in the path, only a hard-coded 5.

It wasn't until later that I saw this part of the documentation and realized how important it was:

match:

  • path - (string) The path pattern used to match. Useful for building nested <Route>s
  • url - (string) The matched portion of the URL. Useful for building nested <Link>s

Avoiding Match Collisions

Let's assume the app we're making is a dashboard, so we want to be able to add and edit users by visiting `/users/add` and `/users/5/edit`. But with the previous examples, /users/:userId already points to a UserProfilePage. So does that mean that the route with /users/:userId now needs to point to yet another sub-sub-layout to accommodate editing and the profile? I don't think so. Since both the edit and profile pages share the same user sub-layout, this strategy works out fine:

const UserSubLayout = ({ match }) => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <Switch>
        <Route exact path={match.path} component={BrowseUsersPage} />
        <Route path={`${match.path}/add`} component={AddUserPage} />
        <Route path={`${match.path}/:userId/edit`} component={EditUserPage} />
        <Route path={`${match.path}/:userId`} component={UserProfilePage} />
      </Switch>
    </div>
  </div>
)

Notice that the add and edit routes strategically come before the profile route to ensure proper matching. Had the profile path been first, visiting `/users/add` would have matched the profile (because "add" would have matched :userId).

Alternatively, we can put the profile route first if we make the path ${match.path}/:userId(\\d+) which ensures that :userId must be a number. Then visiting `/users/add` wouldn't create a conflict. I learned this trick in the docs for path-to-regexp.
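Here's a sketch of that reordering with the regex in place, using the same sub-layout as before:

const UserSubLayout = ({ match }) => (
  <Switch>
    <Route exact path={match.path} component={BrowseUsersPage} />
    {/* :userId(\d+) only matches digits, so "/users/add" falls through to the add route */}
    {/* the edit route still precedes the profile route, since "/users/5/edit"
        would prefix-match "/users/:userId" */}
    <Route path={`${match.path}/:userId(\\d+)/edit`} component={EditUserPage} />
    <Route path={`${match.path}/:userId(\\d+)`} component={UserProfilePage} />
    <Route path={`${match.path}/add`} component={AddUserPage} />
  </Switch>
)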

Authorized Route

It's very common in applications to restrict the user's ability to visit certain routes depending on their login status. Also common is to have a "look-and-feel" for the unauthorized pages (like "log in" and "forgot password") vs the "look-and-feel" for the authorized ones (the main part of the application). To solve each of these needs, consider this main entry point to an application:

class App extends React.Component {
  render() {
    return (
      <Provider store={store}>
        <BrowserRouter>
          <Switch>
            <Route path="/auth" component={UnauthorizedLayout} />
            <AuthorizedRoute path="/app" component={PrimaryLayout} />
          </Switch>
        </BrowserRouter>
      </Provider>
    )
  }
}

Using react-redux with React Router v4 works much the same as it did before: simply wrap <BrowserRouter> in <Provider> and it's all set.

There are a few takeaways with this approach. The first being that I'm choosing between two top-level layouts depending on which section of the application we're in. Visiting paths like `/auth/login` or `/auth/forgot-password` will utilize the UnauthorizedLayout — one that looks appropriate for those contexts. When the user is logged in, we'll ensure all paths have an `/app` prefix which uses AuthorizedRoute to determine if the user is logged in or not. If the user tries to visit a page starting with `/app` and they aren't logged in, they will be redirected to the login page.

AuthorizedRoute isn't a part of v4 though. I made it myself with the help of v4 docs. One amazing new feature in v4 is the ability to create your own routes for specialized purposes. Instead of passing a component prop into <Route>, pass a render callback instead:

class AuthorizedRoute extends React.Component {
  componentWillMount() {
    getLoggedUser()
  }

  render() {
    const { component: Component, pending, logged, ...rest } = this.props
    return (
      <Route {...rest} render={props => {
        if (pending) return <div>Loading...</div>
        return logged
          ? <Component {...this.props} />
          : <Redirect to="/auth/login" />
      }} />
    )
  }
}

const stateToProps = ({ loggedUserState }) => ({
  pending: loggedUserState.pending,
  logged: loggedUserState.logged
})

export default connect(stateToProps)(AuthorizedRoute)

While your login strategy might differ from mine, I use a network request to getLoggedUser() and plug pending and logged into Redux state. pending just means the request is still in flight.

Click here to see a fully working Authentication Example at CodePen.

Other mentions

There are a lot of other cool aspects to React Router v4. To wrap up, though, let's be sure to mention a few small things so they don't catch you off guard.

<Link> vs <NavLink>

In v4, there are two ways to integrate an anchor tag with the router: <Link> and <NavLink>.

<NavLink> works the same as <Link> but gives you some extra styling abilities depending on if the <NavLink> matches the browser's URL. For instance, in the example application, there is a <PrimaryHeader> component that looks like this:

const PrimaryHeader = () => (
  <header className="primary-header">
    <h1>Welcome to our app!</h1>
    <nav>
      <NavLink to="/app" exact activeClassName="active">Home</NavLink>
      <NavLink to="/app/users" activeClassName="active">Users</NavLink>
      <NavLink to="/app/products" activeClassName="active">Products</NavLink>
    </nav>
  </header>
)

The use of <NavLink> allows me to set a class of active on whichever link is active. But also notice that I can use exact on these as well. Without exact, the home page link would be active when visiting `/app/users` because of v4's inclusive matching strategy. In my personal experience, <NavLink> with the option of exact is a lot more stable than the v3 <Link> equivalent.

URL Query Strings

There is no longer a way to get the query string of a URL from React Router v4. It seems to me that the decision was made because there is no standard for how to deal with complex query strings. So instead of baking an opinion into the module, v4 just lets the developer choose how to deal with query strings. This is a good thing.

Personally, I use query-string which is made by the always awesome sindresorhus.
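For illustration, here's roughly how that can look with query-string inside a route-rendered component (the page parameter is a made-up example):

import queryString from 'query-string'

// location is passed to any component rendered by a <Route>
const UsersPage = ({ location }) => {
  const query = queryString.parse(location.search) // "?page=2&sort=name" -> { page: '2', sort: 'name' }
  return <div>Current page: {query.page}</div>
}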

Dynamic Routes

One of the best parts about v4 is that almost everything (including <Route>) is just a React component. Routes aren't magical things anymore. We can render them conditionally whenever we want. Imagine an entire section of your application is available to route to when certain conditions are met. When those conditions aren't met, we can remove routes. We can even do some crazy cool recursive route stuff.
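As a small sketch of conditional routes (the user prop and AdminPage are hypothetical):

const PrimaryLayout = ({ user }) => (
  <main>
    <Route path="/" exact component={HomePage} />
    {/* this route only exists while the condition is met */}
    {user.isAdmin && <Route path="/admin" component={AdminPage} />}
  </main>
)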

React Router 4 is easier because it's Just Components™


What Would Augment Reality? (1-10)

LukeW - Sun, 08/06/2017 - 2:00pm

As the technology industry buzzes about Augmented Reality (AR) applications and hardware, I thought it would be worthwhile to apply Scott Jenson's value > pain theorem to AR gear. That is, "what value would exceed the pain of charging and wearing augmented reality headsets each day?" Are there enough compelling use cases to make AR a daily necessity?

To try and answer this question, I drafted a series of illustrations on "what would augment reality?". For each illustration I assumed audio input control and gaze path/eye-tracking for object identification. Due to these assumptions, I specifically tried to apply a principle of maximum information yet minimum obstruction (MIMO) to the user interface design.

My high-level goal, however, was to literally "augment" reality. That is, to give people abilities they wouldn't otherwise have through the inclusion of digital information and actions in the physical world. Here are my first ten attempts.

So does the value of these use cases outweigh the pain of wearing and charging an Augmented Reality headset each day? ...

How We Solve CSS Versioning Conflicts Here at New Relic

Css Tricks - Fri, 08/04/2017 - 4:09am

At first the title made me think of Git conflicts, but that's not what this is about. It's about namespacing selectors for components. Ultimately, they decided to use a Webpack loader (which doesn't appear to be open source) to prefix every single class with a hashed string representing the version. I guess that must happen in both the HTML and CSS so they match. Lots of folks are landing on style-scoping in one way or another to solve their problems.

It makes me think about another smaller-in-scope issue. Say you have an alternate version of a header that you're going to send to 5% of your users. It has different HTML and CSS. Easy enough to send different HTML to users from the server. But CSS tends to be bundled, and it seems slightly impractical to make an entirely different CSS bundle for a handful of lines of CSS. One solution: add an additional attribute to the new header and ship the new CSS with artificially-boosted specificity to avoid all the old CSS. Then when you go live, remove the new attribute from both.

.header[new] { }



IntersectionObserver comes to Firefox

Css Tricks - Fri, 08/04/2017 - 3:46am

A great intro by Dan Callahan on why IntersectionObserver is so damn useful:

What do infinite scrolling, lazy loading, and online advertisements all have in common?

They need to know about—and react to—the visibility of elements on a page!

Unfortunately, knowing whether or not an element is visible has traditionally been difficult on the Web. Most solutions listen for scroll and resize events, then use DOM APIs like getBoundingClientRect() to manually calculate where elements are relative to the viewport. This usually works, but it's inefficient and doesn't take into account other ways in which an element's visibility can change, such as a large image finally loading higher up on the page, which pushes everything else downward.

The API is deliciously simple.
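To give you a taste, here's a minimal sketch of the API; the #ad element and the 50% threshold are arbitrary choices for illustration:

var observer = new IntersectionObserver(function(entries) {
  entries.forEach(function(entry) {
    // entry.isIntersecting flips as the element crosses the threshold
    console.log(entry.target.id, entry.isIntersecting ? 'visible' : 'hidden');
  });
}, { threshold: 0.5 }); // fire when 50% of the element is in view

observer.observe(document.querySelector('#ad'));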



Creating Photorealistic 3D Graphics on the Web

Css Tricks - Fri, 08/04/2017 - 3:40am

Before becoming a web developer, I worked in the visual effects industry, creating award-winning, high-end 3D effects for movies and TV shows such as Tron, The Thing, Resident Evil, and Vikings. To be able to create these effects, we would need to use highly sophisticated animation software such as Maya, 3ds Max or Houdini, and do long hours of offline rendering on render farms that consisted of hundreds of machines. It's because I worked with these tools for so long that I am now amazed by the state of current web technology. We can now create and display high-quality 3D content right inside the web browser, in real time, using WebGL and Three.js.

Here is an example of a project that is built using these technologies. You can find more projects that use three.js on their website.

Some projects using three.js

As the examples on the three.js website demonstrate, 3D visualizations have a vast potential in the domains of e-commerce, retail, entertainment, and advertisement.

WebGL is a low-level JavaScript API that enables creation and display of 3D content inside the browser using the GPU. Unfortunately, since WebGL is a low-level API, it can be a bit hard and tedious to use. You need to write hundreds of lines of code to perform even the simplest tasks. Three.js, on the other hand, is an open source JavaScript library that abstracts away the complexity of WebGL and allows you to create real-time 3D content in a much easier manner.

In this tutorial, I will be introducing the basics of the three.js library. It makes sense to start with a simple example to communicate the fundamentals better when introducing a new programming library but I would like to take this a step further. I will also aim to build a scene that is aesthetically pleasant and even photorealistic to a degree.

We will just start out with a simple plane and sphere but in the end it will end up looking like this:

See the Pen learning-threejs-final by Engin Arslan (@enginarslan) on CodePen.

Photorealism is the pinnacle of computer graphics, but achieving it is not necessarily a matter of the processing power at your disposal; it's about the smart deployment of techniques from your toolbox. Here are a few techniques that you will be learning about in this tutorial that will help your scenes achieve photorealism:

  • Color, Bump and Roughness Maps.
  • Physically Based Materials.
  • Lighting with Shadows.

Photorealistic 3D portrait by Ian Spriggs

The basic 3D principles and techniques that you will learn here are relevant in any other 3D content creation environment, whether it is Blender, Unity, Maya or 3ds Max.

This is going to be a long tutorial. If you are more of a video person or would like to learn more about the capabilities of three.js you should check out my video training on the subject from Lynda.com.

Requirements

When using three.js, if you are working locally, it helps to serve the HTML file through a local server so you can load in scene assets such as external 3D geometry, images, etc. If you are looking for a server that is easy to set up, you can use Python to spin up a simple HTTP server (for example, python -m http.server). Python is pre-installed on many operating systems.

You don't have to worry about setting up a local dev server to follow this tutorial, though. You will instead rely on data URLs to load in assets like images, removing the overhead of setting up a server. Using this method you will be able to easily execute your three.js scene in online code editors such as CodePen.

This tutorial assumes a prior, basic to intermediate, knowledge of JavaScript and some understanding of front-end web development. If you are not comfortable with JavaScript but want to get started with it in an easy manner you might want to check out the course/book "Coding for Visual Learners: Learning JavaScript with p5.js". (Disclaimer: I am the author)

Let's get started with building 3D graphics on the Web!

Getting Started

I have already prepared a Pen that you can use to follow this tutorial with.

The HTML code that you will be using is going to be super simple. It just needs a div element to host the canvas that is going to display the 3D graphics. It also loads the three.js library (release 86) from a CDN.

<div id="webgl"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/86/three.min.js"></script>

CodePen hides some of the HTML structure that is currently present for your convenience. If you were building this scene in some other online editor or on your local machine, your HTML would need to look something like the code below, where main.js would be the file that holds the JavaScript code.

<!DOCTYPE html>
<html>
<head>
  <title>Three.js</title>
  <style type="text/css">
    html, body {
      margin: 0;
      padding: 0;
      overflow: hidden;
    }
  </style>
</head>
<body>
  <div id="webgl"></div>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/86/three.min.js"></script>
  <script src="./main.js"></script>
</body>
</html>

Notice the simple CSS declaration inside the HTML. This is what you would have in the CSS tab of Codepen:

html, body {
  margin: 0;
  padding: 0;
  overflow: hidden;
}

This is to ensure that you don't have any margin and padding values that might be applied by your browser, and that you don't get a scrollbar, so the graphics can fill the entire screen. This is all we need to get started with building 3D graphics.

Part 1 - Three.js Scene Basics

When working with three.js and with 3D in general, there are a couple of required objects you need to have. These objects are scene, camera and the renderer.

First, you should create a scene. You can think of a scene object as a container for every other 3D object that you are going to work with. It represents the 3D world that you will be building. You can create the scene object by doing this:

var scene = new THREE.Scene();

Another thing that you need when working with 3D is the camera. Think of the camera as the eyes that you will be viewing this 3D world through. When working with a 2D visualization, the concept of a camera usually doesn't exist. What you see is what you get. But in 3D, you need a camera to define your point of view, as there are many positions and angles that you could be looking at a scene from. A camera doesn't only define a position but also other information like the field of view or the aspect ratio.

var camera = new THREE.PerspectiveCamera(
  45, // field of view
  window.innerWidth / window.innerHeight, // aspect ratio
  1, // near clipping plane (beyond which nothing is visible)
  1000 // far clipping plane (beyond which nothing is visible)
);

The camera captures the scene for display purposes but for us to actually see anything, the 3D data needs to be converted into a 2D image. This process is called rendering and you need a renderer to render the scene in three.js. You can initialize a renderer like this:

var renderer = new THREE.WebGLRenderer();

And then set the size of the renderer. This will dictate the size of the output image. You will make it cover the window size.

renderer.setSize(window.innerWidth, window.innerHeight);

To be able to display the results of the render you need to append the domElement property of the renderer to your HTML content. You will use the empty div element that you created that has the id webgl for this purpose.

document.getElementById('webgl').appendChild(renderer.domElement);

And having done all this you can call the render method on the renderer by providing the scene and the camera as the arguments.

renderer.render( scene, camera );

To have things a bit tidier, put everything inside a function called init and execute that function.

init();
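Assembled, init() might look roughly like this at this stage (just the snippets above gathered into one function, no new functionality):

function init() {
  var scene = new THREE.Scene();

  var camera = new THREE.PerspectiveCamera(
    45,
    window.innerWidth / window.innerHeight,
    1,
    1000
  );

  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.getElementById('webgl').appendChild(renderer.domElement);

  renderer.render(scene, camera);
}

init();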

And now you would see nothing... but a black screen. Don't worry, this is normal. The scene is working but since you didn't include any objects inside the scene, what you are looking at is basically empty space. Next, you will be populating this scene with 3D objects.

See the Pen learning-threejs-01 by Engin Arslan (@enginarslan) on CodePen.

Adding Objects to the Scene

Geometric objects in three.js are made up of two parts. A geometry that defines the shape of the object and a material that defines the surface quality, the appearance, of the object. The combination of these two things makes up a mesh in three.js which forms the 3D object.

Three.js allows you to create some simple shapes like a cube or a sphere in an easy manner. You can create a simple sphere by providing the radius value.

var geo = new THREE.SphereGeometry(1);

There are various kinds of materials that you could use on geometries. Materials determine how an object reacts to the scene lighting. We can use a material to make an object reflective, rough, transparent, etc. The default material that three.js objects are created with is the MeshBasicMaterial. MeshBasicMaterial is not affected by the scene lighting at all. This means that your geometry is going to be visible even when there is no lighting in the scene. You can pass an object with a color property and a hex value to the MeshBasicMaterial to set the desired color for the object. You will use this material for now, but later update it to have your objects be affected by the scene lighting. You don't have any lighting in the scene for now, so MeshBasicMaterial should be a good enough choice.

var material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });

You can combine the geometry and material to create a mesh which is going to form the 3D object.

var mesh = new THREE.Mesh(geometry, material);

Create a function to encapsulate this code that creates a sphere. You won't be creating more than one sphere in this tutorial but it is still good to keep things neat and tidy.

function getSphere(radius) {
  var geometry = new THREE.SphereGeometry(radius);
  var material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
  var sphere = new THREE.Mesh(geometry, material);
  return sphere;
}

var sphere = getSphere(1);

Then you need to add this newly created object to the scene for it to be visible.

scene.add(sphere);

Let's check out the scene again. You will still see a black screen.

See the Pen learning-threejs-02 by Engin Arslan (@enginarslan) on CodePen.

The reason why you don't see anything right now is that whenever you add an object to the scene in three.js, the object gets placed at the center of the scene, at the coordinates of 0, 0, 0 for x, y and z. This simply means that you currently have the camera and the sphere at the same position. You should change the position of either one of them to be able to start seeing things.

3D coordinates

Let's move the camera 20 units on the z-axis. This is achieved by setting the position.z property on the camera. 3D objects have position, rotation and scale properties that allow you to transform them in 3D space.

camera.position.z = 20;

You could move the camera on the other axes as well.

camera.position.x = 0;
camera.position.y = 5;
camera.position.z = 20;

The camera is positioned higher now, but the sphere is not at the center of the frame anymore. You need to point the camera at it. To be able to do so, you can call a method on the camera called lookAt. The lookAt method on the camera determines which point the camera is looking at. Points in 3D space are represented by vectors, so you can pass a new Vector3 object to the lookAt method to have the camera look at the 0, 0, 0 coordinates.

camera.lookAt(new THREE.Vector3(0, 0, 0));

The sphere object doesn't look too smooth right now. The reason for that is that the SphereGeometry function actually accepts two additional parameters, the width and height segments, which affect the resolution of the surface. The higher these values, the smoother the curved surfaces will appear. I will set this value to 24 for both the width and height segments.

var geo = new THREE.SphereGeometry(radius, 24, 24);

See the Pen learning-threejs-03 by Engin Arslan (@enginarslan) on CodePen.

Now you will create a simple plane geometry for the sphere to sit on. The PlaneGeometry function requires width and height parameters. In 3D, 2D objects don't have both of their sides rendered by default, so you need to pass a side property to the material to have both sides of the plane geometry render.

function getPlane(w, h) {
  var geo = new THREE.PlaneGeometry(w, h);
  var material = new THREE.MeshBasicMaterial({
    color: 0x00ff00,
    side: THREE.DoubleSide,
  });
  var mesh = new THREE.Mesh(geo, material);
  return mesh;
}

You can now add this plane object to the scene as well. You will notice that the initial rotation of the plane geometry is parallel to the y-axis, but you will likely need it to be horizontal for it to act as a ground plane. There is one important thing you should keep in mind regarding rotations in three.js, though: they use radians as a unit, not degrees. A rotation of 90 degrees in radians is equivalent to Math.PI/2.

var plane = getPlane(50, 50);
scene.add(plane);
plane.rotation.x = Math.PI/2;
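If degrees feel more natural to you, three.js also ships a small converter, so this line is equivalent to the rotation above:

// THREE.Math.degToRad converts degrees to radians: degToRad(90) is Math.PI / 2
plane.rotation.x = THREE.Math.degToRad(90);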

When you created the sphere object, it got positioned using its center point. If you would like to move it above the ground plane then you can just increase its position.y value by the current radius amount. But that wouldn't be a programmatic way of doing things. If you would like the sphere to stay on the plane whatever its radius value is, you should make use of the radius value for the positioning.

sphere.position.y = sphere.geometry.parameters.radius;

See the Pen learning-threejs-04 by Engin Arslan (@enginarslan) on CodePen.

Animations

You are almost done with the first part of this tutorial. But before we wrap it up, I want to illustrate how to do animations in three.js. Animations in three.js make use of the requestAnimationFrame method on the window object which repeatedly executes a given function. It is somewhat like a setInterval function but optimized for the browser drawing performance.

Create an update function and pass the renderer, scene, and camera into it, then execute the render method of the renderer inside this function. You will also call requestAnimationFrame inside there, and call the update function recursively from the callback function that is passed to requestAnimationFrame. It is better to illustrate this in code than to write about it.

function update(renderer, scene, camera) {
  renderer.render(scene, camera);
  requestAnimationFrame(function() {
    update(renderer, scene, camera);
  });
}

Everything might look the same to you at this point, but the core difference is that the requestAnimationFrame function is making the scene render around 60 frames per second with a recursive call to the update function. This means that if you execute a statement inside the update function, that statement will get executed around 60 times per second. Let's add a scaling animation to the sphere object. To be able to select the sphere object from inside the update function you could pass it as an argument, but we will use a different technique. First, set a name attribute on the sphere object and give it a name of your liking.

sphere.name = 'sphere';

Inside the update function, you could find this object using its name by using the getObjectByName method on its parent object, the scene.

var sphere = scene.getObjectByName('sphere');
sphere.scale.x += 0.01;
sphere.scale.z += 0.01;

With this code, the sphere is now scaling on its x and z axes. Our intention is not to create a scaling sphere though. We are setting up the update function so that you can leverage it for different animations later on. Now that you have seen how it works, you can remove this scaling animation.

See the Pen learning-threejs-05 by Engin Arslan (@enginarslan) on CodePen.

Part 2 - Adding Realism to the Scene

Currently, we are using MeshBasicMaterial, which displays the given color even when there is no lighting in the scene, resulting in a very flat look. Real-world materials don't work this way though. The visibility of a surface in the real world depends on how much light is reflecting back from the surface to our eyes. Three.js comes with a couple of different materials that provide a better approximation of how real-world surfaces behave, and one of them is the MeshStandardMaterial. MeshStandardMaterial is a physically based rendering material that can help you achieve photorealistic results. This is the kind of material that modern game engines like Unreal or Unity use, and it is an industry standard in gaming and visual effects.

Let's start using the MeshStandardMaterial on our objects and change the color of the materials to white.

var material = new THREE.MeshStandardMaterial({
  color: 0xffffff,
});

You will once again get a black render at this point. That is normal. For objects to be visible, we need to have lights in the scene. This wasn't a requirement with MeshBasicMaterial, as it is a simple material that displays the given color under all conditions, but other materials require an interaction with light to be visible. Let's create a SpotLight-creating function. You will be creating two spotlights using this function.

function getSpotlight(color, intensity) {
  var light = new THREE.SpotLight(color, intensity);
  return light;
}

var spotLight_01 = getSpotlight(0xffffff, 1);
scene.add(spotLight_01);

You might start seeing something at this point. Position the light and the camera a bit differently for better framing and shading, and create a secondary light as well.

var spotLight_02 = getSpotlight(0xffffff, 1);
scene.add(spotLight_02);

camera.position.x = 0;
camera.position.y = 6;
camera.position.z = 6;

spotLight_01.position.x = 6;
spotLight_01.position.y = 8;
spotLight_01.position.z = -20;

spotLight_02.position.x = -12;
spotLight_02.position.y = 6;
spotLight_02.position.z = -10;

Having done this you have two light sources in the scene, illuminating the sphere from two different positions. The lighting is helping a bit in understanding the dimensionality of the scene, but things are still looking extremely fake at this point because the lighting is missing a critical component: the shadows!

Rendering a shadow in three.js is unfortunately not too straightforward. This is because shadows are computationally expensive and we need to activate shadow rendering in multiple places. First, you need to tell the renderer to start rendering shadows:

var renderer = new THREE.WebGLRenderer();
renderer.shadowMap.enabled = true;

Then you need to tell the light to cast shadows. Do that in the getSpotlight function.

light.castShadow = true;

You should also tell the objects to cast and/or receive shadows. In this case, you will make the sphere cast shadows and the plane receive them.

sphere.castShadow = true;  // inside getSphere, on the sphere mesh
mesh.receiveShadow = true; // inside getPlane, on the plane mesh

After all these settings, we should start seeing shadows in the scene. Initially, they might be a bit low quality. You can increase the resolution of the shadows by setting the light's shadow map size.

light.shadow.mapSize.x = 4096;
light.shadow.mapSize.y = 4096;

MeshStandardMaterial has a couple of properties, such as roughness and metalness, that control the interaction of the surface with the light. The properties take values between 0 and 1, and they control the corresponding behavior of the surface. Increase the roughness value on the plane material to 1 to see the surface look more like rubber as the reflections get blurrier.

// material adjustments
var planeMaterial = plane.material;
planeMaterial.roughness = 1;

We won't be using 1 as a value in this tutorial though. Feel free to experiment with values but set it back to 0.65 for roughness and 0.75 for metalness.

planeMaterial.roughness = 0.65;
planeMaterial.metalness = 0.75;

Even though the scene should be looking much more promising right now it is still hard to call it realistic. The truth is, it is very hard to establish photorealism in 3D without using texture maps.

See the Pen learning-threejs-06 by Engin Arslan (@enginarslan) on CodePen.

Texture Maps

Texture maps are 2D images that can be mapped on a material for the purpose of providing surface detail. So far you were only getting solid colors on the surfaces but using a texture map you can map any image you would like on a surface. Texture maps are not only used to manipulate the color information of surfaces but they can also be used to manipulate other qualities of the surface like reflectiveness, shininess, roughness, etc.

Textures can be derived from photographic sources or painted from scratch. For a texture to be useful in a 3D context, it should be captured in a certain manner. Images that have reflections or shadows in them, or images where the perspective is too distorted, wouldn't make great texture maps. There are several dedicated websites for finding textures online. One of them is textures.com, which has a pretty good archive. They have some free download options but require you to register to download. Another website for 3D textures is Megascans, which does high-resolution, high-quality environment scans of high-end production quality.

I have used a website called mb3d.co.uk for this example. This site provides seamless, free-to-use textures. A seamless texture is one that can be repeated on a surface many times without any discontinuities where the edges meet. This is the link to the texture file that I have used. I have decreased the size to 512px for width and height and converted the image file to a data URI using an online service called ezgif, to be able to include it as part of the JavaScript code as opposed to loading it in as a separate asset. (Hint: if you use this service, copy only the bare data URI, not the surrounding image tag.)

Create a function that returns the data URI we have generated so that we don't have to put that huge string in the middle of our code.

function getTexture() {
  var data = 'data:image/jpeg;base64,/...'; // paste your data URI inside the quotation marks
  return data;
}

Next, you need to load in the texture and apply it to the plane surface. You will be using the three.js TextureLoader for this purpose. After loading in the texture, you will assign it to the map property of the desired material to have it act as a color map on the surface.

var textureLoader = new THREE.TextureLoader();
var texture = textureLoader.load(getTexture());
planeMaterial.map = texture;

Things would be looking rather ugly right now as the texture on the surface is pixelated. The image is stretching too much to cover the entire surface. What you can do is to make the image repeat itself instead of scaling so that it doesn't get as pixelated. To do so, you need to set the wrapS and wrapT properties on the desired map to THREE.RepeatWrapping and specify a repetition value. Since you will be doing this for other kinds of maps as well (like bump or roughness map) it is better to create a loop for this:

var repetition = 6;
var textures = ['map']; // we will add 'bumpMap' and 'roughnessMap' later

textures.forEach((mapName) => {
  planeMaterial[mapName].wrapS = THREE.RepeatWrapping;
  planeMaterial[mapName].wrapT = THREE.RepeatWrapping;
  planeMaterial[mapName].repeat.set(repetition, repetition);
});

This should look much better. Since the texture you are using is seamless you wouldn't notice any disconnections around the edges where the repetition happens.

Loading a texture is actually an asynchronous operation. This means that your 3D scene is generated before the image file is loaded in. But since you are continuously rendering the scene using requestAnimationFrame, this doesn't cause any issues in this example. If you weren't doing this, you would need to use callbacks or other async methods to manage the loading order.
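If you ever do need to wait for the texture, TextureLoader.load accepts an onLoad callback as its second argument; a minimal sketch:

var textureLoader = new THREE.TextureLoader();
textureLoader.load(getTexture(), function(texture) {
  // this callback runs only after the image has finished loading
  planeMaterial.map = texture;
  renderer.render(scene, camera); // re-render now that the texture is ready
});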

See the Pen learning-threejs-07 by Engin Arslan (@enginarslan) on CodePen.

Other Texture Maps

As mentioned in the previous section, textures are not only used to define the color of surfaces but to define other qualities of them as well. One other way textures can be used is as bump maps. When used as a bump map, the brightness values of the texture simulate a height effect.

planeMaterial.bumpMap = texture;

The bump map should use the same repetition configuration as the color map, so include it in the textures array.

var textures = ['map', 'bumpMap'];

With a bump map, the brighter the value of a pixel, the higher the corresponding surface will look. But a bump map doesn't actually change the surface; it just manipulates how the light interacts with the surface to create an illusion of uneven topology. The bump amount looks a bit too strong right now. Bump maps work best when they are used in subtle amounts, so let's change the bumpScale parameter to something lower for a more subtle effect.

planeMaterial.bumpScale = 0.01;

Notice how this texture made a huge difference in appearance. The reflections are not perfect anymore but nicely broken up as they would be in real life. Another kind of map slot that is available to the StandardMaterial is the roughness map. A texture map used as a roughness map allows you to control the sharpness of the reflections using the brightness values of a given image.

planeMaterial.roughnessMap = texture;

var textures = ['map', 'bumpMap', 'roughnessMap'];

According to the three.js documentation, the StandardMaterial works best when used in conjunction with an environment map. An environment map simulates a distant environment reflecting off of the reflective surfaces in the scene. It really helps when you are trying to simulate reflectivity on objects. Environment maps in three.js are in the form of cube maps. A cube map is a panoramic view of a scene that is mapped inside a cube, made up of 6 separate images that correspond to the cube's faces. Since loading 6 more images inside an online editor is going to be a bit too much work, you won't actually be using an environment map in this example. But to make this sphere object a bit more interesting, add a roughness map to it as well. You will be using this texture, but 320x320px in size and as a data URI.

Create a new function called getMetalTexture:

function getMetalTexture() {
  var data = 'data:image/jpeg;base64,/...'; // paste your data URI inside the quotation marks
  return data;
}

And apply it on the sphere material as bumpMap and roughnessMap:

var sphereMaterial = sphere.material;
var metalTexture = textureLoader.load(getMetalTexture());

sphereMaterial.bumpMap = metalTexture;
sphereMaterial.roughnessMap = metalTexture;
sphereMaterial.bumpScale = 0.01;
sphereMaterial.roughness = 0.75;
sphereMaterial.metalness = 0.25;

See the Pen learning-threejs-08 by Engin Arslan (@enginarslan) on CodePen.

Wrapping it up!

You are almost done! Here you will do just a couple of small tweaks. You can see the final version of this scene file in this Pen.

Provide a non-white color to the lights. Notice how you can actually use CSS color values as strings to specify color:

var spotLight_01 = getSpotlight('rgb(145, 200, 255)', 1);
var spotLight_02 = getSpotlight('rgb(255, 220, 180)', 1);

And add some subtle random flickering animation to the lights to add some life to the scene. First, assign a name property to the lights so you can locate them inside the update function using the getObjectByName method.

spotLight_01.name = 'spotLight_01';
spotLight_02.name = 'spotLight_02';

And then create the animation inside the update function using the Math.random() function.

var spotLight_01 = scene.getObjectByName('spotLight_01');
spotLight_01.intensity += (Math.random() - 0.5) * 0.15;
spotLight_01.intensity = Math.abs(spotLight_01.intensity);

var spotLight_02 = scene.getObjectByName('spotLight_02');
spotLight_02.intensity += (Math.random() - 0.5) * 0.05;
spotLight_02.intensity = Math.abs(spotLight_02.intensity);

And as a bonus, inside the scene file, I have included the OrbitControls script for the three.js camera which means that you can actually drag your mouse on the scene to interact with the camera! I have also made it so that the scene resizes with the changing window size. I have achieved this using an external script for convenience.
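The resize handling boils down to something like this (a sketch of what such a script does, not the actual source of the one I used):

window.addEventListener('resize', function() {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix(); // recompute the projection with the new aspect ratio
  renderer.setSize(window.innerWidth, window.innerHeight);
});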

See the Pen learning-threejs-final by Engin Arslan (@enginarslan) on CodePen.

Now, this scene is somewhat close to becoming photorealistic. There are still many missing pieces though. The sphere is too dark due to the lack of reflections and ambient lighting. The ground plane looks too flat at glancing angles. The profile of the sphere is too perfect; it is CG (Computer Graphics) perfect. The lighting is not actually as realistic as it could be; it doesn't decay (lose intensity) with distance from the source. You should also probably add particle effects, camera animation, and post-processing filters if you want to go all the way with this. But this still should be a good enough example to illustrate the power of three.js and the quality of graphics that you can create inside the browser. For more information on what you could achieve using this amazing library, you should definitely check out my new course on Lynda.com about the subject!

Thanks for making it this far! I hope you enjoyed this write-up. Feel free to reach out to me @inspiratory on Twitter or on my website with any questions you might have!


Integrate Your Wufoo Forms Everywhere

Css Tricks - Thu, 08/03/2017 - 5:22am

At its heart, Wufoo is a form builder. If you need any type of form, you can build it super quickly by selecting and customizing the fields you need in Wufoo's fantastically easy to use form builder. I can hardly imagine a more useful web app for web designers and developers.

But what is a form, at its essence? Just a means to collect data. The important part is what you do with that data. You can do all the obvious stuff. You can have entries emailed to you. You can build reports from the data. You can explore the data inside Wufoo, or use the API to access the data outside of Wufoo.

Those things are just the tip of the iceberg of what you can do with data you collect with your Wufoo forms. There are built-in integrations! For example, say you have a form that includes an email address field, and you'd like to ship that email address over to MailChimp or Campaign Monitor into a particular mailing list. That's just a few clicks away. Or say the form has some element of lead generation and you want to send the details to Salesforce. Or you want to tweet data from the form upon submission. Same deal, just a few clicks.

One of my favorites is that Wufoo works tremendously well with Zapier. That's the whole point of Zapier: you can use it to connect services together! For example, the Ask a Question form over on the ShopTalk website not only emails Dave and me but adds the question to a Trello board for us to organize into shows. We could easily have it integrate with Evernote, dump into Google Sheets, or work with any of hundreds of other services. Wufoo is such a great source of data for integrations, it begs to be played with.

Direct Link to ArticlePermalink

Integrate Your Wufoo Forms Everywhere is a post from CSS-Tricks

If you really dislike FOUT, `font-display: optional` might be your jam

Css Tricks - Thu, 08/03/2017 - 5:13am

The story of FOUT is so fascinating. Browsers used to do it: show a "fallback" font while a custom font loads, then flop out the text once it has. The industry kinda hated it, because it felt jerky and could cause re-layout. So browsers changed and started hiding text until the custom font loaded. The industry hated that even more. Nothing worse than a page with no text at all!

Font loading got wicked complicated. Check out this video of Zach Leatherman and me talking it out.

Now browsers are saying: why don't we give control back to you in the form of APIs and CSS? You can take control of the behavior with the font-display property (spec).

It seems like font-display: swap; gets most of the attention. It's for good reason. That's the value that brings back FOUT in the strongest way. The browser will not wait at all for an unloaded font. It will show text immediately in the best-matching-and-available font in the stack, then if/when a higher-matching font loads, it will "swap" to that.

@font-face {
  font-family: Merriweather;
  src: url(/path/to/fonts/Merriweather.woff) format('woff');
  font-weight: 400;
  font-style: normal;
  font-display: swap;
}

body {
  font-family:
    Merriweather, /* Use this if it's immediately available, or swap to it if it gets downloaded within a few seconds */
    Georgia,      /* Use this if Merriweather isn't downloaded yet. */
    serif;        /* Bottom of the stack. System doesn't have Georgia. */
}

The people wanted their FOUT back, they got it. Well, as soon as font-display is supported in their browser, anyway. Right now we're looking at Chrome 60+ as the only browser shipping it (which means it won't be long for the rest of the Blink clan). It's behind a flag in Firefox already, so that's a good sign.

But what if you don't particularly like FOUT?

One option there is font-display: fallback;. It's slightly weirdly named, as it's a lot like the default behavior (auto or block). The difference is that it has a really short waiting period (the "font block period") of ~100ms where nothing is shown. If the font isn't ready to go in that super short period, a fallback will be shown instead. Then the font has ~3s to get loaded and it will swap in; otherwise, it never will.

That seems pretty reasonable. What you're preventing there is a late-term swap, which is when it's the most awkward. Still that risk of FOUT though.

What if you'd prefer that, when the web font isn't immediately available, the fallback font is shown and never swapped out, even after the web font has downloaded? You can have that! It's what font-display: optional; does. It still gives the ~100ms font block period (giving the font a fighting chance to show up on first page view), but after that, the fallback is shown and will not swap. Chances are, the font did ultimately get downloaded, and on the next page view it will be cached and used.
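In code, that's a one-line change to the same sort of @font-face block we saw earlier (the font name and path are just placeholders):

@font-face {
  font-family: Merriweather;
  src: url(/path/to/fonts/Merriweather.woff) format('woff');
  font-weight: 400;
  font-style: normal;
  font-display: optional; /* ~100ms block period, then the fallback sticks; no late swap */
}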

This is font-display: optional; in Chrome 60. With a clean cache, the page loads with the fallback font. The font is downloaded, but not used. Then with the cache primed, the second page load uses the font.

I'm fairly open minded here. I can see how font-display: swap; is ideal for the best accessibility of content. I'm no big fan of FOUT, so I can see the appeal of font-display: optional;. I can also see how font-display: fallback; kinda splits the middle.

Aside from the browser support being vital in making use of this, there is also the matter of web font providers supporting it. For example, when using Google Fonts default font loading methods, you don't have an opportunity to use font-display because the @font-face blocks come from the Google-hosted stylesheets. There is an open discussion on that. There are ways to use Google Fonts with your own @font-face blocks though. Definitely consult Zach Leatherman's guide.

If you really dislike FOUT, `font-display: optional` might be your jam is a post from CSS-Tricks

Separate Form Submit Buttons That Go To Different URLs

Css Tricks - Wed, 08/02/2017 - 6:01am

This came up the other day. I forget where, but I jotted it down in my little notepad for blog post ideas. I wrote it down because what I was overhearing was way over-complicating things.

Say you have a form like this:

<form action="/submit"> <!-- inputs and stuff --> <input type="submit" value="Submit"> </form>

When you submit that form, it's going to go to the URL `/submit`. Say you need another submit button that submits to a different URL. It doesn't matter why. There is always a reason for things. The web is a big place and all that.

I web searched around for other ways people have tried handling this.

One way was to give up on submitting to different URL's, but give each submit button a shared name but different value, then check for that value when processing in order to do different things.

<input type="submit" name="action" value="Value-1"> <input type="submit" name="action" value="Value-2">

You could read that value during your processing and redirect if you desired. But that's a workaround for the stated problem.

Another way was to use JavaScript to change the action of the <form> when the button was clicked. There are any number of ways to do that, but it boils down to:

<form name="form"> <!-- inputs and stuff --> <input type="submit" onclick="javascript: form.action='/submit';"> <input type="submit" onclick="javascript: form.action='/submit-2';"> </form>

Which of course relies on JavaScript to work, which isn't a huge deal, but isn't quite as progressive enhancement friendly as it could be.

The correct answer (if I may be so bold) is that HTML has this covered for you already. Perhaps it's not as well-known as it should be. Hence the blog post here, I suppose.

It's all about the formaction attribute, which you can put directly on submit buttons, which overrides the action on the form itself.

<form action="/submit"> <input type="submit" value="Submit"> <input type="submit" value="Go Elsewhere" formaction="/elsewhere"> </form>

That's all. Carry on.

Separate Form Submit Buttons That Go To Different URLs is a post from CSS-Tricks

Making A Bar Chart with CSS Grid

Css Tricks - Wed, 08/02/2017 - 2:57am

Editors note: this post is just an experiment to play with new CSS properties and so the code below shouldn’t be used without serious improvements to accessibility.

I have a peculiar obsession with charts and for some reason, I want to figure out all the ways to make them with CSS. I guess that's for two reasons. First, I think it's interesting that there are a million different ways to style charts and data on the web. Second, it's great for learning about new and unfamiliar technologies. In this case: CSS Grid!

So this chart obsession of mine got me thinking: how would you go about making a plain ol' responsive bar chart with CSS Grid, like this:

See the Pen CSS Grid Chart Final by Robin Rendle (@robinrendle) on CodePen.

Let's take a look at how I got there!

The fast and easy approach

Since Grid can be confusing and weird at first glance, let's focus on making a really hacky prototype to begin with. To get started we need to write the markup for our chart:

<div class="chart"> <div class="bar-1"></div> <div class="bar-2"></div> <div class="bar-3"></div> <div class="bar-4"></div> <!-- all the way up to bar-12 --> </div>

Each of those bar- classes will make up one whole bar in our chart and, as yucky as this might seem, for now we won't worry too much about semantics or labelling the grid or the data. That'll come later – let's focus on the CSS so we can learn more about Grid.

Okay so with that we can now get styling! We need 12 bars in our chart with a 5px gap between them so we can set our parent class .chart with the relevant CSS Grid properties:

.chart { display: grid; grid-template-columns: repeat(12, 1fr); grid-column-gap: 5px; }

That's pretty straightforward if you're at all familiar with Grid, but what it effectively describes is this: "I want 12 columns with each of the child elements having an equal width (1 fraction) with a 5px gap between them".

But now, here's the sneaky part: with Grid we can use the grid-template-rows property to set the height of each our chart's bars:

.chart { display: grid; grid-template-columns: repeat(12, 1fr); grid-template-rows: repeat(100, 1fr); grid-column-gap: 5px; }

We can use that neat new property to make 100 rows in our grid, and this way we can then set each of our bars to be a percentage of that height, which makes the math easy for us. Again, we're using that repeat() function so that each of our rows is the same height.

Before I explain that all in more detail, let's give our chart a max-width and set it to the center of the screen with flex:

* { box-sizing: border-box; } html, body { margin: 0; background-color: #eee; display: flex; justify-content: center; } .chart { height: 100vh; width: 70vw; /* other chart styles go here */ }

At this point our chart will still be empty because we haven't told our child elements to take up any space in the grid. So let's fix that! We're going to select every class that contains bar and use the grid-row-start and grid-row-end properties to make them fill up the vertical space in our grid and so eventually we'll end up changing one of these properties to define the custom height of each bar:

[class*="bar"] { grid-row-start: 1; grid-row-end: 101; border-radius: 5px 5px 0 0; background-color: #ff4136; }

See the Pen CSS Grid Chart 1 by Robin Rendle (@robinrendle) on CodePen.

So if you're bewildered by those grid-row properties then that's okay! We're telling each of our bars to start at the very top of the grid (1) and then end at the very bottom (101). But why are we using 101 as a value for that property when we only told our grid to contain 100 rows? Let's explore that a little bit before we move on!

Grid lines

One of the peculiar things about Grid that I hadn't considered before working on this demo was the concept of grid lines which is super important to understanding this new layout tool. Here's an illustration of how grid lines are plotted in a four column, four row grid:

This new example contains four columns and four rows with the following styles:

.grid { grid-gap: 5px; grid-template-columns: repeat(4, 1fr); grid-template-rows: repeat(4, 1fr); } .special-col { grid-row: 2 / 4; background-color: #222; }

grid-row is a shorthand property for grid-row-start and grid-row-end, with the first value where we want the element to start on the grid and the final value where we want it to end. But! This means we want that special element here to start at grid line 2 and end at grid line 4 – not at the end of row 4. If we wanted that black box to stretch all the way down to the bottom of the grid then we'd need it to end at line 5, or grid-row: 2 / 5, which makes an awful lot of sense if you think about it.

In other words, we shouldn't think of elements taking up whole rows or columns in a grid but rather only spanning between these grid lines. That took me a while to conceptually figure out and get used to after I dived into a recent tutorial by Jen Simmons on the matter.

Anyway!

Back to the demo

So that's the reason why in our chart demo we end all columns at line 101 and not 100 – because we want each bar to fill up the last row (100), we have to send it to that particular grid line (101).

Now, since our .chart class uses vw/vh units, we also have a nicely responsive chart without having to do much work. If you resize that graph below you'll find it nicely packs down or stretches to always take up the whole viewport:

See the Pen CSS Grid Chart 1 by Robin Rendle (@robinrendle) on CodePen.

From here we can begin to style each of the individual bars to give them the right data, and there are a whole bunch of different ways we can do this. Let's take a look at just one of them.

First, let's imagine we want the first bar in our chart, .bar-1, to be 50/100 or half the height of the chart. We could write the following CSS and be done with it:

[class*="bar"] { grid-row-end: 101; } .bar-1 { grid-row-start: 50; }

See the Pen CSS Grid Chart 2 by Robin Rendle (@robinrendle) on CodePen.

That looks fine! But, here's the catch – what we're really declaring with that grid-row-start is for the bar to start at "50" and end at "101" but that's not really what we want. Here's an example: let's say the data changes in this hypothetical example and we need it to now be 20/100. Let's go back and change the value:

.bar-1 { grid-row-start: 20; }

See the Pen CSS Grid Chart 3 by Robin Rendle (@robinrendle) on CodePen.

That's not right! We want the bar not to start at grid line 20 but to be 20/100 of the chart's height. We could do the math ourselves and change our value to grid-row-start: 81; or we could use the grid-row-end property instead, right? Well, not quite:

.bar-1 { grid-row-end: 20; }

See the Pen CSS Grid Chart 4 by Robin Rendle (@robinrendle) on CodePen.

The size of the bar is now correct but the position is wrong, because we're telling the bar to end at line 20, so it hangs from the top of the chart instead of sitting at the bottom. So how do we fix this and make our code super easy to read? Well, one approach is to use Sass to do the math for us. Ideally we'd like to write something like the following:

.bar-1 { // makes a bar that's 60/100 and positioned at the bottom of our chart @include chartValue(60); }

And no matter what value we put into that mixin we always want to get the correct height and position of the chart on the grid. The math that powers this mixin is actually pretty darn simple: all we need to do is take our value, deduct it from the total number of rows and then attach it to the grid-row-start property, like this:

@mixin chartValue($data) {
  $totalRows: 101;
  $result: $totalRows - $data;
  grid-row-start: $result;
}

.bar-1 {
  @include chartValue(20);
}

See the Pen CSS Grid Chart 5 by Robin Rendle (@robinrendle) on CodePen.

So the final value that gets churned out by our Sass mixin is grid-row-start: 81 but our code is super legible! We don't even have to look at our grid to know what's going to happen – the chart item will be positioned at the bottom of the grid and the value will always be correct.

How do we create all those grid classes though? I think one neat approach is to just let Sass generate all those classes automatically for us. With just a little modification to our code we could do something like this:

$totalRows: 101;

@mixin chartValue($data) {
  $result: $totalRows - $data;
  grid-row-start: $result;
}

// stop at 100: a value of 101 would output grid-row-start: 0, which isn't a valid grid line
@for $i from 1 through 100 {
  .bar-#{$i} {
    @include chartValue($i);
  }
}
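For reference, the compiled CSS that loop churns out looks roughly like this:

.bar-1 { grid-row-start: 100; }
.bar-2 { grid-row-start: 99; }
/* ...and so on, down to... */
.bar-100 { grid-row-start: 1; }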

This iterates over all of the row sizes in our chart and generates an individual class for each one. And so now we could update our markup like so:

<div class="bar-45"></div> <div class="bar-100"></div> <div class="bar-63"></div> <div class="bar-11"></div>

See the Pen CSS Grid Chart 6 by Robin Rendle (@robinrendle) on CodePen.

And there we have it! We don't have to write individual classes for each of our elements by hand and we can easily update our chart by just changing the markup. This Sass loop will spit out a lot of classes that go unused, but there are plenty of tools out there to strip those out.

One last thing we can do with our grid is style each column with a color by odd/even:

[class*="bar"]:nth-child(odd) { background-color: #ff4136; } [class*="bar"]:nth-child(even) { background-color: #0074d9; }

See the Pen CSS Grid Chart 7 by Robin Rendle (@robinrendle) on CodePen.

And there we have it! A responsive chart built with CSS Grid. There's plenty that we could do to tidy up this code, however. The first thing we should probably do is make sure that we're using semantic markup and use a tool to remove all those classes that are being spat out by our Sass loop. We could also dig into how this chart is rendered on mobile and think about how we ought to label each column and chart axis.

But for now, this is just the beginning. The TL;DR of this post: CSS Grid can be used for all sorts of things rather than just setting text and images next to each other. It opens up a whole new branch of web design for us to experiment with.

Making A Bar Chart with CSS Grid is a post from CSS-Tricks

Making The Most Of It: Hybrid App UX Design

Usability Geek - Tue, 08/01/2017 - 1:10pm
As more businesses fulfil their consumer’s demand for the mobile presence, many corporate decision-makers find themselves posed with a choice between two popular options: the native mobile app, or...

The Critical Request

Css Tricks - Tue, 08/01/2017 - 4:17am

Serving a website seems pretty simple: Send some HTML, the browser figures out what resources to load next. Then we wait patiently for the page to be ready.

Little do you know, a lot is going on under the hood.

Have you ever wondered how the browser figures out which assets should be requested and in what order?

Today we're going to take a look at how we can use resource priorities to improve the speed of delivery.

Resource priority at work

Most browsers parse HTML using a streaming parser—assets are discovered within the markup before it has been fully delivered. As assets are found, they're added to a network queue along with a predetermined priority.

In Chrome today, there are a number of asset prioritization levels: Very Low, Low, Medium, High and Very High. Peeking into the Chrome DevTools source shows that these are aliased to slightly different labels: Lowest, Low, Medium, High and Highest.

To see how your site is prioritizing requests, you can enable a priority column in the Chrome DevTools network request table.

If you're using Safari Technology Preview, the (new!) Priority column can be shown in exactly the same way.

Show the Priority column by right clicking any of the request table headings.

You'll also find the priority for a given request in the Performance tab.

Resource timings and priorities are shown on hover
How does Chrome prioritize resources?

Each resource type (CSS, JavaScript, fonts, etc.) has its own set of rules that dictate how it'll be prioritized. What follows is a non-exhaustive list of network priority plans:

HTML— Highest priority.

Styles—Highest priority. Stylesheets that are referenced using an @import directive will also be prioritized Highest, but they'll be queued after blocking scripts.

Images are the only assets whose priority can vary based on viewport heuristics. All images start with a priority of Low but will be upgraded to Medium priority when they're set to be rendered within the visible viewport. Images outside the viewport (also known as "below the fold") will remain at Low priority.

During the process of researching this article, I discovered (with the help of Paul Irish) that Chrome DevTools are currently misreporting images that have been upgraded to Medium as Low priority. Paul wrote up a bug report, which you can track here.

If you're interested in reading the Chrome source that handles the image priority upgrade, start with UpdateAllImageResourcePriorities and ComputeResourcePriority.

Ajax/XHR/fetch()—High priority.

Scripts follow a complex loading prioritization scheme. (Jake Archibald wrote about this in detail during 2013. If you want to know the science behind it, I suggest you grab a cuppa and dive in). The TL;DR version is:

  • Scripts loaded using <script src="name.js"></script> will be prioritized as High if they appear in the markup before an image.
  • Scripts loaded using <script src="name.js"></script> will be prioritized as Medium if they appear in the markup after an image.
  • Scripts that use async or defer attributes will be prioritized as Low.
  • Scripts using type="module" will be prioritized as Low.

Fonts are a bit of a strange beast; they're hugely important resources (who else loves the "I see it!", "Now it's gone", "Whoa, a new font!" game?), so it makes sense that fonts are downloaded at the Highest priority.

Unfortunately, most @font-face rules are found within an external stylesheet (loaded using something like: <link rel="stylesheet" href="file.css">). This means that web fonts are usually delayed until after the stylesheet has downloaded.

Even if your CSS file references a @font-face font, it will not be requested until it is used on a selector and that selector matches an element on the page. If you've built a single page app that doesn't render any text until its JavaScript has loaded and run, you're delaying the fonts even further.

What makes a request critical?

Most websites effectively ask the browser to load everything for the page to be fully rendered; there is no concrete concept of "above the fold".

Back in the day, browsers wouldn't make more than 6 simultaneous requests per domain – people hacked around this by using assets-1.domain.tld, assets-2.domain.tld hosts to increase the number of parallel downloads, but failed to recognize that there would be a DNS hit for each new domain and a TCP connection for each asset.

While this approach had some merits, many of us didn't understand the full impacts and certainly didn't have good quality browser developer tools to confirm these experiments.

Thankfully today, we have great tools at our disposal. Using CNN as an example, let's identify assets that are absolutely required for the viewport to be visually ready (also known as useful to someone trying to read it).

The user critical content is the masthead, and leading article.

There are really only 5 things that are necessary to display this screen (and not all of them need to be loaded before the site is usable):

  • Most importantly, the HTML. If all else fails the user can still read the page.
  • CSS
  • The logo (A PNG background-image placed by CSS. This could probably be an inline SVG).
  • 4(!) web font weights.
  • The leading article image.

These assets (note the lack of JavaScript) are essential to the visuals that make up the main viewport of the page. These assets are the ones that should be loaded first.

Diving into the performance panel in Chrome shows us that around 50 requests are made before the fonts and leading image are requested.

CNN.com becomes fully rendered somewhere around the 9s mark. This was recorded using a 4G connection, with reasonably spotty connectivity.

There's a clear mismatch between the requests that are required for viewing and the requests that are being made.

Controlling resource priorities

Now that we've defined what critical requests are, we can start to prioritize them using a few simple, powerful tweaks.

Preload (<link rel="preload" href="font.woff" as="font">) instructs the browser to add font.woff to the browser's download queue at a "High" priority.

Note: as="font" is the reason why font.woff would be downloaded as High priority?—?It's a font, so it follows the priority plan discussed earlier in the "How does Chrome prioritise resources?" section.

Essentially, you're telling the browser: You might not know it yet, but we're going to need this.

This is perfect for those critical requests that we identified earlier. Web fonts can nearly always be categorized as absolutely critical, but there are some fundamental issues with how fonts are discovered and loaded:

  • We wait for the CSS to be loaded, parsed and applied before @font-face rules are discovered.
  • A font is not added to the browser's network queue until it matches up its CSS rules to the DOM via the selectors.
  • This selector matching occurs during style recalculation. It doesn't necessarily happen immediately after download. It can be delayed if the main thread is busy.

In most cases, fonts are delayed by a number of seconds, just because we're not instructing the browser to download them in a timely fashion.

On a mobile device with a slow CPU, laggy connectivity, and without a properly-constructed fallback this can be an absolute deal breaker.

Preload in action: fonts

I ran two tests against calibreapp.com. On the first run, I'd changed nothing about the site at all. On the second, I added these two tags:

<link rel="preload" as="font" href="Calibre-Regular.woff2" type="font/woff2" crossorigin /> <link rel="preload" as="font" href="Calibre-Semibold.woff2" type="font/woff2" crossorigin />

Below, you'll see a visual comparison of the rendering of these two tests. The results are quite staggering:

The page rendered 3.5 seconds faster when the fonts were preloaded.

Bottom: Fonts are preloaded – the site finishes rendering in 5 seconds on an "emerging markets" 3G connection.

<link rel="preload"> also accepts a media="" attribute, which will selectively prioritize resources based on @media query rules:

<link rel="preload" href="article-lead-sm.jpg" as="image" type="image/jpeg" media="only screen and (max-width: 48rem)">

Here, we're able to preload a particular image for small screen devices. Perfect for that "main hero image".

As demonstrated above, a simple audit and a couple of tags later and we've vastly improved the delivery & render phase. Super.

Getting tough on web fonts

69% of sites use web fonts, and unfortunately, they're providing a sub-par experience in most cases. They appear, then disappear, then appear again, change weights and jolt the page around during the render sequence.

Frankly, this sucks on almost every level.

As you've seen above, controlling the request order and priority of fonts has a massive effect on render speed. It's clear that we should be looking to prioritize web font requests in most cases.

We can make further improvements using the CSS font-display property, which allows us to control how fonts display while they are being requested and loaded.

There are 4 options at your disposal, but I'd suggest using font-display: swap;, which will show the fallback font until the web font has loaded—at which point it'll be replaced.

Given a font stack like this:

body { font-family: Calibre, Helvetica, Arial; }

The browser will display Helvetica (or Arial, if you don't have Helvetica) until the Calibre font has loaded. Right now, Chrome and Opera are the only browsers that support font-display, but it's a step forward, and there's no reason to not use it starting today.

Keeping on top of page performance

As you're well aware, websites are never "complete". There are always improvements to be made and it can feel overwhelming, quickly.

Calibre is an automated tool for auditing performance, accessibility and web best practices; it'll help you stay on top of things. (Disclosure: I run Calibre.)

As you've seen above, there are a few metrics that are key to understanding user performance.

  • First paint tells us when the browser goes from "nothing to something".
  • First meaningful paint tells us when the browser has "rendered something useful".
  • Finally, First Interactive will tell you when the page has fully rendered, and the JavaScript main thread has settled (low CPU activity for a number of seconds).
Here, we set a budget on CNN’s "First meaningful paint" for <5 seconds.

You can set budgets against all these key user experience metrics. When those budgets are exceeded (or met) your team will be notified by Slack, email or wherever you like.

Calibre displays network request priority, so you can be sure of the requests being made. Tweak priorities and improve performance.

I hope that you've learned some valuable skills to audit requests and their priorities, as well as sparked a few ideas to experiment with to make meaningful performance improvements.

Your critical request checklist:
  • ✅ Enable the Priority column in Chrome DevTools.
  • ✅ Decide which requests must be made before users can see a fully rendered page.
  • ✅ Reduce the number of required critical requests where possible.
  • ✅ Use <link rel="prefetch"> for assets that will probably be used on the next page of the site.
  • ✅ Use Link: <https://example.com/other/styles.css>; rel=preload; as=style; nopush HTTP headers to tell the browser which resources to preload before the HTML has been fully delivered.
  • 🚫 HTTP/2 Server push is thorny. Probably avoid it. (See this informative document by Tom Bergan, Simon Pelchat and Michael Buettner, as well as Jake Archibald's "HTTP/2 Push is tougher than I thought")
  • ✅ Use font-display: swap; with web fonts where possible.
  • ✅ Are web fonts being used? Can they be removed? If not: prioritize them and use WOFF2!
  • ✅ Is a late loading script delaying your single page app from displaying anything at all?
  • 📹 Check this great free screencast by Front End Center that shows how to load web fonts with the best possible fallback experience.
  • 🔍 View chrome://net-internals/#events and load a page – this logs network related events.
  • No request is faster than no request.

The Critical Request is a post from CSS-Tricks

Mobile App Pop-Up Guidelines

Usability Geek - Mon, 07/31/2017 - 12:46pm
Pop-ups, dialogues, those little boxes that appear on your screen, whatever you may call them, are not to be taken for granted. Although they are a relatively “small” element of your app,...

A Personal Journey to Fix a Grunt File Permissions issue

Css Tricks - Mon, 07/31/2017 - 2:34am

I was working on a personal project this past week and got a weird error when I tried to compile my Sass files. Unfortunately, I did not screenshot the exact error, but it was something along the lines of this:

Failed to write to location cssmin: style.css EACCES

That EACCES code threw me for a loop but I was able to deduce that it was a file permissions issue, based on the error description saying it was unable to write the file and from some quick StackOverflow searching.

I couldn't find a lot of answers for how to fix this, so I made an attempt to fix it myself. Many of the forum threads I found suggested it could be a permissions issue with Node, so I went through the painful exercise of reinstalling it and only made things worse, because then Terminal could not even find the npm or grunt commands.

I finally had to reach out to a buddy (thanks, Neal!) to sort things out. I thought I'd chronicle the process he took me through in case it helps other people down the road.

I'll preface all this by saying I am working on a Mac. That said, I doubt that everything covered here will be universally applicable for all platforms.

Reinstalling Node

Before I talked to Neal, I went ahead and reinstalled Node via the installer package on nodejs.org.

I was told that my permissions issue will likely come up again on future projects until I take the step to re-install Node yet again using a package manager. Some people use Homebrew, though I've seen forums caution against it for various reasons. Node Version Manager appears to be a solid option because it's designed specifically for managing multiple Node versions on the same machine, which makes it easy for future installations and version switching on projects.
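If you do go the Node Version Manager route, the day-to-day commands are mercifully short. As a sketch (the version number here is just an example):

nvm install 8.2.1        # install a specific version of Node
nvm use 8.2.1            # switch the current shell to that version
nvm alias default 8.2.1  # make it the default for new shells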

Setting a Path to Node Modules

With Node installed, I expected I could point Terminal to my project:

cd /path/to/my/project/folder

...then install the Node modules I put in the project's package.json file:

npm install

Instead, I got this:

npm: command not found

Crap. Didn't I just reinstall Node?

Turns out Terminal (or more correctly, my system in general) was making an assumption about where Node was installed and needed to be told where to look.

I resolved this by opening the system's .bash_profile file and adding the path there. You can get all geeky and open the file up with Terminal using something like:

touch .bash_profile
open .bash_profile

Or do what I did and simply open it with your code editor. It's an invisible file, but most code editors show those in the folder panel. The file should be located in your user directory, which is something like Macintosh HD > Users > Your User Folder.

This is what I added to ensure Node was being referenced in the correct location:

export PATH="$HOME/.npm-packages/bin:$HOME/.node/bin:$PATH"

That's basically telling the system to go look in the .npm-packages/bin/ directory for Node commands. In fact, if I browse into that folder, I do indeed see my NPM and Grunt commands, both of which Terminal previously could not locate.

With that little snippet in hand and saved to .bash_profile, I was stoked when I tried to install my Node modules again:

cd /path/to/my/project/folder
npm install

...and saw everything load up as expected!

Updating Permissions

My commands were up and running again, but I was still getting the same EACCES error that I started with at the beginning of this post.

The issue was that the project folders were originally created as the Root user, probably when I was in Terminal using sudo for something else. As a result, running my Grunt tasks with my user account was not enough to give Grunt permission to write files to those folders.

I changed directory to the folder where Grunt was trying to write my style.css file, then checked the permissions of that folder:

cd /path/of/the/parent/directory
ls -al
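That displays the permissions for all of the files and folders in that directory. In my case, the output looked something like this (the names and dates here are illustrative, not my actual project):

drwxr-xr-x   8 root  staff   256 Jul 28 10:12 css
-rw-r--r--   1 root  staff  4820 Jul 28 10:12 style.css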

The owner column was the giveaway: the folder belonged to the root user, not to my account, so my user couldn't write files into it. I changed directory to that specific folder and used the following command to make myself the owner (replacing username with my username, of course):

sudo chown -R username:staff .

I went to run my Grunt tasks again and, voila, everything ran, compiled and saved as I hoped.

In Summary

I know the issues I faced here may either be unique to me or of my own making, but I hope this helps someone else. As a web designer, the command line can be daunting or even a little scary to me, even though I feel I'm expected to be familiar with tools that rely on it.

It's great that we have community-driven sites like StackOverflow when we need them. At the same time, there is often no substitute for reaching out for personal help.

A Personal Journey to Fix a Grunt File Permissions issue is a post from CSS-Tricks

Designing Between Ranges

Css Tricks - Mon, 07/31/2017 - 2:33am

CSS is written slowly, line by line, and with each passing moment, we limit the space of what’s possible for our design system. Take this example:

body { font-family: 'Proxima-Nova', sans-serif; color: #333; }

With just a couple of lines of CSS we’ve set the defaults for our entire codebase. Most elements (like paragraphs, lists and plain ol’ divs) will inherit these instructions. But what we rarely think about when we write CSS is that from here on out we’ll have to continuously override these rules if we want another typeface or color. And it’s those overrides that cause a lot of issues in terms of maintenance and scalability. And it’s those issues that cost us heartbreak, time and money.

In the example above that’s probably not an issue but I think in general we don’t recognize the true strength and dangers of the cascade and what it means to our design systems; most folks think that the cascade is designed to let us overwrite previous instructions but I would warn against that. Every time we override a style that is inherited we are making our codebase vastly more complex. How many hours have you spent inspecting an element and scrolled through each rule in order to understand why an element looks the way it does or how the long chain of inheritance has messed up your styles completely?

I think the reason for this is because we often don’t set the proper default styles up for our elements. And when we want to change the style of an element, instead of taking the time to question and refactor those default styles, we simply override them – making matters worse.

So I’ve been thinking a lot about how we ought to make a codebase that incentivizes us all to write better code and how to design our CSS so that it doesn’t encourage other people to make hacky classes to override things.

One of my favorite posts on this subject was written earlier this month by Brandon Smith where he describes the ways in which CSS can become so complicated (emphasis mine):

...never be more explicit than you need to be. Web pages are responsive by default. Writing good CSS means leveraging that fact instead of overriding it. Use percentages or viewport units instead of a media query if possible. Use min-width instead of width where you can. Think in terms of rules, in terms of what you really mean to say, instead of just adding properties until things look right. Try to get a feel for how the browser resolves layout and sizing, and make your changes and additions on top of that judiciously. Work with CSS, instead of against it.

One of Brandon’s suggestions is to avoid using the width property altogether and to stick to max-width or min-width instead. Throughout that post he reminds us that whatever we’re building should sit between ranges, between one of two extremes. So here’s an example of a class with a bad default style:

.button { width: 300px; }

Is that button likely to be that width always, everywhere? On mobile? In every media query? In every state? I highly doubt it and in fact, I believe that this class is just the sort that’s begging to be overwritten with yet another class:

.button-thats-just-a-bit-bigger { width: 310px; }

That’s a silly example but I’ve seen code like this in a lot of teams – and it’s not because the person was writing bad code but because the system encouraged them to write bad code. Everyone is on a deadline for shipping a product and most folks don’t have the time or the inclination to dive into a large codebase and refactor those core default styles I mentioned. It’s easier to just keep writing CSS because it solves all of their problems immediately.

Along these lines, Jeremy Keith wrote about this issue just the other day:

Unlike a programming language that requires knowledge of loops, variables, and other concepts, CSS is pretty easy to pick up. Maybe it’s because of this that it has gained the reputation of being simple. It is simple in the sense of “not complex”, but that doesn’t mean it’s easy. Mistaking “simple” for “easy” will only lead to heartache.

I think that’s what’s happened with some programmers coming to CSS for the first time. They’ve heard it’s simple, so they assume it’s easy. But then when they try to use it, it doesn’t work. It must be the fault of the language because they know that they are smart, and this is supposed to be easy. So they blame the language. They say it’s broken. And so they try to “fix” it by making it conform to a more programmatic way of thinking.

I can’t help but think that they would be less frustrated if they would accept that CSS is not easy. Simple, yes, but not easy.

The reason why CSS is simple is because of the cascade, but it’s not as easy as we might first think because of the cascade, too. It’s those default styles that filter down into everything that is the biggest strength and greatest weakness of the language. And I don’t think JavaScript-based solutions will help with that as much as everyone argues to the contrary.

So what’s the solution? I think that by designing in a range, thinking between states and looking at each and every line of CSS as a suggestion instead of an absolute, unbreakable law is the first step forward. What’s the step after that? Container queries. Container queries. Container queries.

Designing Between Ranges is a post from CSS-Tricks

What is Timeless Web Design?

Css Tricks - Fri, 07/28/2017 - 3:20am

Let's say you took on a client, and they wanted something very specific from you. They wanted a website that without any changes at all, would still look good in 10 years.

Turns out, when you pose this question to a bunch of web designers and developers, the responses vary hugely!

The Bring It On Crowd

There are certain folks who see this as an intriguing challenge and would relish the opportunity.

Accept.

With glee.

— Jeremy Keith (@adactio) July 27, 2017

Accept the challenge &#x1f44c;

— Dick (@michaeldick) July 27, 2017

Totally deliver &#x1f44d;

— Kevin Nagurski (@knagurski) July 27, 2017

The "Keep It Simple" Crowd

This is mostly where my own mind went:

focus on minimalist design and great typography.

— Zach Leatherman (@zachleat) July 27, 2017

I would avoid extremes: no extremely vivid colors; a classic font set; clean but not overly minimal; moderate border radiuses

— keith•j•grant (@keithjgrant) July 27, 2017

Focus on the typography.

— Daniel Sellers (@daniel_sellers) July 27, 2017

Keep it clean and minimal with excellent typography.

— ? Sean Bunton (@seanbunton) July 27, 2017

Plus of course some nods to Motherfucking website and Better Motherfucking Website.

The "Nope" Crowd

An awful lot of folks would straight up just say no. Half?

Decline.

— Cennydd (@Cennydd) July 27, 2017

To be fair, we didn't exactly set the stage with a lot of detail here. I bet some folks imagined these clients as dummies that don't know what they are asking.

Decline the business, because that’s not a reasonable expectation.

— Len Damico (@lendamico) July 27, 2017

Refuse.

— Jez McKean (@jezmck) July 27, 2017

I wonder whether many of these designers would have responded differently if the client had presented themselves well, clearly knew what they were asking for, and were happy to pay for it.

Laugh

— Woman Code Warrior (@amberweinberg) July 28, 2017

Still, it's curious that so many designers didn't see any of the challenge here, just the absurdity.

The "Let's Get Technical" Crowd

I'm partially in this group! What things can and should we reach for in this project, and what should we avoid?

A) system fonts
B) semantic HTML
C) let the browser's shifting sands on how to render the page update the site for you
D) git drnk

— Sean T. McBeth (@Sean_McBeth) July 28, 2017

mobile-first, svg graphics, time spent on ux analysis (card sort analysis etc), decoupled modular micro-service front+back end

— Adam Mackintosh (@agm1984) July 27, 2017

Honestly an SPA, no external dependents, and build an archive and entry mechanism to handle content. Wildcard redirect and buy 10 yr hosting

— Jeff (@etisdew) July 28, 2017

If you're looking for an actual answer besides no, maybe html only, inline styles, nothing dynamic, no external calls.

— David v2.9.5 (@davidlaietta) July 28, 2017

"No external calls" seems particularly smart here.

Based on experience and observation in my time in the industry, I'd say somewhere around 75% of websites are completely gone after 10 years – let alone particular URLs on those websites staying exactly the same, which is what reliable external resources would require.

The "It's About The Content" Crowd

Simple, driven by content

— Eric Shuff (@ericshuff) July 27, 2017

Focus on making it easy to publish and manage great content. Because great content can easily outlive 10 years.

— Deepti Boddapati (@DeeptiBoddapati) July 27, 2017

Very basic, content focus without all the pretty things.

— Bethany (@fleshskeleton) July 27, 2017

Minimal. Large text elements with a few images mixed in with the content and little color.

— Veronica Domeier (@hellodomeier) July 27, 2017

The "See Existing Examples" Crowd

Adopt the Craigslist design philosophy.

— Chris Cashdollar (@ccashdollar) July 27, 2017

Borrow from @daringfireball

— Joe Casabona (@jcasabona) July 27, 2017

Hand them the Google homepage.

— Darth Seh (@seh) July 27, 2017

https://t.co/wu3bjk5zoA that would work I guess

— Pascal (@murindwaz) July 27, 2017

Plus things like Wikipedia and Space Jam. Also see Brutalist Websites.

Interesting Takes

Our very own Robin Rendle had an interesting take. Due to population growth, growing networks, and mobile device ubiquity, the site may not want to be in English, especially if it has a global audience. Or at least, it should be in multiple major world languages.

Leave it to Sarah to come in for the side tackle:

shape the culture that uses the app/site to hate change

— Overactive Algorithm (@sarah_edo) July 28, 2017

And Christopher to give us some options to keep them on their toes:

A) say, “Is this one of those tricky Google job interview questions?”
B) say, “GeoCities lasted 10,000 years, so this will be easy.”
C) &#x1f440;

— Christopher Schmitt (@teleject) July 28, 2017

Why do any design at all?

text file on the server root

— Eric Bailey (@ericwbailey) July 27, 2017

Although I might argue in that case, you might as well make it an `.html` file instead of `.txt` so you can at least hyperlink things.

My Take

Clearly "it depends" on what this website is supposed to accomplish.

I can tell you what I picture though when I think about it.

I picture a business card style website. I picture black-and-white. I picture a clean and simple logo.

I picture really (really) good typography. Typography is way older than the web and will be around long after. Pick typefaces that have already stood the test of time.

I picture some, but minimal copy. Even words go stale.

I picture the bare minimum call to action. Probably just a displayed email address. I'd bet on email over phones.

Technically, I think you can count on HTML, CSS, and JavaScript. I don't think anything you can do now with those languages will be deprecated in 10 years.

Layout-wise, I'd look at doing as much as you can with viewport units. Screens will absolutely change in size and density in 10 years. Anything you can make SVG, make SVG. That will reduce worry about new screens. Responsive design is an obvious choice.
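In practice, that could start as simply as this (the numbers are arbitrary):

html { font-size: calc(14px + 0.5vw); } /* type scales gently with the viewport */
main { width: 90vw; max-width: 40em; margin: 0 auto; }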

Anything that even passably smells like a trend, avoid.

Inputs will also definitely change. We're already starting to assume a touch screen. Presumably, you won't have to do anything overly interactive, but if you do, I wouldn't bet on a keyboard and mouse.

I'd also spend time on the hosting aspects. Register the domain name for the full 10 years. See if you can pre-buy hosting that long. Pick a company you're reasonably sure will last that long. Use a credit card that you're reasonably sure will last that long. Make sure anything that needs to renew renews automatically, like SSL certificates.

More thoughts, as always, welcome in the comments.

What is Timeless Web Design? is a post from CSS-Tricks

Chrome 60

Css Tricks - Fri, 07/28/2017 - 2:10am

The latest version of Chrome, version 60, is a pretty big deal for us front-end developers. Here are the two most interesting things for me that just landed, via Pete LePage, who outlines all the features in this video:

Direct Link to ArticlePermalink

Chrome 60 is a post from CSS-Tricks

Party Parrot

Css Tricks - Thu, 07/27/2017 - 11:11am

Just a bit of forking fun, kicked off by Tim Van Damme and the inimitable Party Parrot.

See the Pen :party: by Tim Van Damme (@maxvoltar) on CodePen.

See the Pen :party: by Chris Coyier (@chriscoyier) on CodePen.

See the Pen :party: by Steve Gardner (@steveg3003) on CodePen.

See the Pen :party: by Louis Hoebregts (@Mamboleoo) on CodePen.

See the Pen :party: by Martin Pitt (@nexii) on CodePen.

Party Parrot is a post from CSS-Tricks

The Ultimate Uploading Experience in 5 Minutes

Css Tricks - Thu, 07/27/2017 - 7:25am

Filestack is a web service that completely handles file uploads for your app.

Let's imagine a little web app together. The web app allows people to write reviews for anything they want. They give the review a name, type up their review, upload a photo, and publish it. Saving a name and text to a database is fairly easy, so the trickiest part about this little app is handling those photo uploads. Here are just a few considerations:

  • You'll need to design a UI. What does the area look like that encourages folks to pick a photo and upload it? What happens when they are ready to upload that photo and interact? You'll probably want to design that experience.
  • You'll likely want to support drag and drop, how is that going to work?
  • You'll probably want to show upload progress. That's just good UX.
  • A lot of people keep their files in Dropbox or other cloud services these days, can you upload from there?
  • What about multiple files? Might make sense to upload three or four images for a review!
  • Are you going to restrict sizes? The app should probably just handle that automatically, right?

That's certainly not a comprehensive list, but I think you can see how every bit of that is a bunch of design and development work. Well, hey, that's the job, right? It is, but the job is even more so about being smart with your time and money to make your app a success. Being smart here, in my opinion, is seriously looking at Filestack to give you a fantastic uploading experience, while you spend your time on your product vision, not already-solved problems.

With Filestack, as a developer, implementation is beautifully easy:

var client = filestack.init('yourApiKey');

client.pick(pickerOptions).then(function(result) {
  console.log(JSON.stringify(result.filesUploaded));
});

And you get an incredibly full featured picker like this:

Notice how the upload is so fast you barely notice it.

And that is completely configurable, of course. Their documentation is super nice, so figuring out how to do all that is no problem. Want to limit it to 3 files? Sure. Only images? Yep. Allow only particular upload integrations? That's your choice.
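As a sketch, a pickerOptions object along those lines might look like the following. A caveat: these option names are from memory, so treat them as assumptions and double-check them against Filestack's documentation:

var pickerOptions = {
  maxFiles: 3,                                  // allow up to 3 files
  accept: ['image/*'],                          // images only
  fromSources: ['local_file_system', 'dropbox'] // limit the upload integrations
};

client.pick(pickerOptions).then(function(result) {
  console.log(JSON.stringify(result.filesUploaded));
});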

We started this talking about photos. Filestack is particularly good at handling those for you, in part because it means that you can offer a really great UX (upload whatever!) while also making sure you do all the right developer stuff with those photos. Like if a user uploads a 5 MB photo, no problem, you can allow them to drop it, alter it, and then you can serve a resized and optimized version of it.

Similar stuff exists for video, audio, and other special file types. And don't let me stop you from checking out all the really fancy stuff, like facial recognition, filtering, collages, and all that.

Serve it from where? Well that's your call. If you really need to store your files somewhere specific, you can do that. Easier, you can let Filestack be your document store as well as your CDN for delivering the files.

Another thing to think about with this type of development work is maintenance. Rolling your own system means you're on your own when things change. Browsers evolve. The mobile landscape is ever-changing. APIs change. Third parties come and go and make life difficult. All of that stuff is abstracted away with Filestack.

With all this happening in JavaScript, Filestack works great with literally any way you're building a website. And if you have an iOS app as well, don't worry, they have an SDK for that too!

Direct Link to ArticlePermalink

The Ultimate Uploading Experience in 5 Minutes is a post from CSS-Tricks

The Browser Statistics That Matter

Css Tricks - Thu, 07/27/2017 - 3:08am

In which I argue that the only browser usage statistics that make sense to use for decision-making are the ones gathered from the website being worked on itself.

The reason you can’t use global statistics as a stand-in for your own is because they could be wildly wrong ... Sites like StatCounter that track the worldwide browser market are fascinating, but I’d argue largely exist as dinner party talk.

Direct Link to ArticlePermalink

The Browser Statistics That Matter is a post from CSS-Tricks

How to be evil (but please don’t!) – the modals & overlays edition

Css Tricks - Wed, 07/26/2017 - 2:00am

We've all been there. Landed on a website only to be slapped with a modal that looked something like the one below:

Hello darkness, my old friend.

For me that triggers a knee-jerk reaction: curse for giving them a pageview, close the tab, and never return. But there's also that off case when we might actually try to get to the info behind that modal. So the next step in this situation is to bring up DevTools in order to delete the modal and overlay, and maybe some other useless things that clutter up the page while we're at it.

This is where that page starts to dabble in evil.

We may not be able to get to the DevTools via "Inspect Element" because the context menu might be disabled. That one is easy, it only takes them one line of code:

addEventListener('contextmenu', e => e.preventDefault(), false);

But hey, no problem, that's what keyboard shortcuts are for, right? Right... unless there's a bit of JavaScript in the vein of the one below running:

addEventListener('keydown', e => {
  if(e.keyCode === 123 /* F12: Chrome, Edge dev tools */ ||
     (e.shiftKey && e.ctrlKey && (
        e.keyCode === 73 /* + I: Chrome, FF dev tools */ ||
        e.keyCode === 67 /* + C: Chrome, FF inspect el */ ||
        e.keyCode === 74 /* + J: Chrome, FF console */ ||
        e.keyCode === 75 /* + K: FF web console */ ||
        e.keyCode === 83 /* + S: FF debugger */ ||
        e.keyCode === 69 /* + E: FF network */ ||
        e.keyCode === 77 /* + M: FF responsive design mode */)) ||
     (e.shiftKey && (
        e.keyCode === 118 /* + F5: Firefox style editor */ ||
        e.keyCode === 116 /* + F5: Firefox performance */)) ||
     (e.ctrlKey && e.keyCode === 85 /* + U: Chrome, FF view source */)) {
    e.preventDefault();
  }
}, false);

Still, nothing can be done about the browser menus. That will finally bring up DevTools for us! Hooray! Delete those awful nodes and... see how the page refreshes because there's a mutation observer watching out for such actions. Something like the bit of code below:

addEventListener('DOMContentLoaded', e => {
  let observer = new MutationObserver((records) => {
    let reload = false;
    records.forEach((record) => {
      record.removedNodes.forEach((node) => {
        if(node.id === 'overlay' && !validCustomer()) reload = true;
      });
      if(reload) window.location.reload(true);
    });
  });
  observer.observe(document.documentElement, {
    attributes: true,
    childList: true,
    subtree: true,
    attributeOldValue: true,
    characterData: true
  });
});

"This is war!" mode activated! Alright, just change the modal and overlay styles then! Except, all the styles are inline, with !important on top of that, so there's no way we can override it all without touching that style attribute.

Oh, the !importance of it all...

Some people might remember how 256 classes override an id, but this has changed in WebKit browsers in the meantime (it still happens in Edge and Firefox though) and 256 IDs never could override an inline style anyway.

Well, just change the style attribute then. However, another "surprise" awaits: there's also a bit of JavaScript watching out for attribute changes:

if(record.type === 'attributes' && (record.target.matches && record.target.matches('body, #content, #overlay')) && record.attributeName === 'style' && !validCustomer()) reload = true;

So unless there are some styles that could help us get the overlay out of the way that the person coding this awful thing missed setting inline, we can't get past this by modifying style attributes.

The first thing to look for is the display property as setting that to none on the overlay solves all problems. Then there's the combo of position and z-index. Even if these are set inline on the actual overlay, if they're not set inline on the actual content below the overlay, then we have the option of setting an even higher z-index value rule for the content and bring it above the overlay. However, it's highly unlikely anyone would miss setting these.

If offsets (top, right, bottom, left) aren't set inline, they could help us shift the overlay off the page and the same goes for margin or translate properties. In a similar fashion, even if they're set inline on the overlay, but not on the actual content and on the parent of both the overlay and the content, then we can shift this parent laterally by something like 100vw and then shift just the content back into view.
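As a sketch of that parent-shifting trick (the .parent class is hypothetical and stands for whatever element wraps both #overlay and #content):

.parent { transform: translateX(100vw); }   /* shove the overlay and the content off-screen */
#content { transform: translateX(-100vw); } /* then bring just the content back into view */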

At the end of the day, even a combination of opacity and pointer-events could work.

However, if the person coding the annoying overlay didn't miss setting any of these inline and we have that bit of JS... we cannot mess with the inline styles without making the page reload.

But wait! There's something that can override inline values with !important, something that's likely to be missed by many... and that's @keyframe animations! Adding the following bit of CSS in a new or already existing style element brings the overlay and modal behind the actual content:

#overlay { animation: a 1s infinite }

@keyframes a {
  0%, to { z-index: -1 }
}

However, there might be a bit of JavaScript that prevents us from adding new style or link elements (to reference an external stylesheet) or modifying already existing ones to include the above bit of CSS, as can be seen here.

if(record.type === 'characterData' &&
   record.target.parentNode.tagName.toLowerCase() === 'style')
  reload = true;

record.addedNodes.forEach((node) => {
  if(node.matches && node.matches('style:not([data-href]), link[rel="stylesheet"]'))
    reload = true;
});

See the Pen How to be evil (but please don't) by Ana Tudor (@thebabydino) on CodePen.

But if there already is an external stylesheet, we could add our bit of CSS there and, save for checking for changes in a requestAnimationFrame loop, there is no way of detecting such a change (there have been talks about a style mutation observer, but currently, we don't yet have anything of the kind).

Of course, the animation property could also be set inline to none, which would prevent us from using this workaround.

But in that case, we could still disable JavaScript and remove the overlay and modal. Or just view the page source via the browser menu. The bottom line is: if the content is already there, behind the overlay, there is absolutely no sure way of preventing access to it. But trying to anyway is a sure way to get people to curse you.

I wrote this article because I actually experienced something similar recently. I eventually got to the content. It just took more time and it made me hate whoever coded that thing.

It could be argued that these overlays are pretty efficient when it comes to stopping non-technical people, which make up the vast majority of a website's visitors, but when they go to the trouble of trying to prevent bringing up DevTools, node deletions, style changes and the like, and pack the whole JavaScript that does this into hex on top of it all, they're not trying to stop just non-technical people.

How to be evil (but please don’t!) – the modals & overlays edition is a post from CSS-Tricks
