Tech News

Typespotting: Car dealership window painting

Nice Web Type - Fri, 06/15/2018 - 1:13pm

Perched on the corner of a busy intersection, normally all you see when you look at this SOMA car dealership is… well, a lot of cars in their showroom. I wasn’t too disappointed when their remodeling work gave me a fresh opportunity to look at some lettering instead.

I was impressed by the different styles at play, and I thought all of them seemed really fun. I counted at least three distinct variations, and I expected our visual search feature would match each with a different font from the Typekit library.

I started with the large, swashy script first.

Results of running the “Entrance” lettering through visual search.

The first two results here didn’t feel quite connected enough to me. Sarina seemed much closer — although it’s a little more sprawled-out than the original lettering and doesn’t quite match the style of the capital E.

Scriptorama sample

As I scrolled down the list of possible matches I also noticed Scriptorama, which fit the E a little better.

Then I tried finding a match for the blockier lettering.

Visual search results for bubbly, boxy lettering

The visual search engine seemed to have some trouble reading this one. Maybe the asymmetry was too much for its robot brain to handle. The off-center O in particular reminded me of Blackcurrant, which wasn’t anywhere in the search results. Typography of Coop seemed close, though.

Blackcurrant was even wackier than I remembered.

Finally I looked for a second script, this one a little more able to accommodate fine lines and details.

Visual search results for the smaller script lettering

Among the top results, Brush Script is great but maybe a little too beautiful for the setting here.

CC Sign Language

CC Sign Language showed up further down in the results, and while it definitely isn’t a script font, its personality actually seems just about right for this situation.

I stuck with showy Scriptorama for the larger “Sales Office” copy here, which captures a 1950s vibe that feels about as fun as the original sign did in my mind. For the smaller copy I used CC Sign Language, which gives a truly hand-painted character to the sign. It’s hard to imitate really good lettering with fonts, but I feel that the styles here come close to getting the right personality. Maybe I’ll revisit it next time I have a car to sell…

Seen any neat type in the wild lately? If you snap a photo of it, try sending it through our visual search to see what’s similar in the Typekit library – and let us know what you find!

Creating your own meme generator

Css Tricks - Fri, 06/15/2018 - 3:16am

Almost every time a new meme pops up in my Twitter feed, I think of a witty version to create. I'm not alone in this. Memes are often a way to acknowledge a shared experience or idea. In a variation of the "Is this a pigeon" meme that has been making the rounds online, designer Daryl Ginn joked about the elementary nature of most applications that say they use artificial intelligence.

pic.twitter.com/nAHki0YFyV

— Daryl Ginn (@darylginn) May 16, 2018

Several people replied to his tweet saying something along the lines of "replace this with this." Daryl's version got them thinking about other possible variations. Platforms like imgFlip exist to make meme generation fast and easy. However, there is only so much customization they can allow. For many memes, creating new versions can only be done by people with Photoshop knowledge. But it doesn't have to be so! For memes that require a bit more than Impact text slapped on an image, a meme generator can be created using the HTML Canvas API. In this tutorial, we're going to make a generator for the #saltbae meme.

But first...

Let's look at some fun interactive meme examples!

The website pablo.life allows you to create your own Kanye West TLOP album cover by changing the text and image.

This is one of my favorites:

The digital agency R/GA created the Straight Outta Somewhere campaign where users "show the world where they're from by uploading their own photo and filling in the blank after 'Straight Outta ____.'" Users can download and share the meme.

Developer Isaac Hepworth created the Trump Executive Order Generator.

Spotify collaborated with Migos to create a range of downloadable Valentine's Day cards that can be customized by changing names.

Let's build our own meme generator!

Now, the tutorial. In a popular version of the #saltbae meme, instead of salt, Salt Bae (whose name is Nusret Gökçe) sprinkles something other than salt.

Loading an image

The first thing we have to do is load the original image onto the canvas. You can load an image in one of two ways: from a URL, or from an image that already exists in the DOM via a hidden <img> tag.

Here's how we do it with a hidden image tag:

<canvas id="canvas" width="1024" height="1024">
  Canvas requires a browser that supports HTML5.
</canvas>
<img crossOrigin="Anonymous" id="salt-bae" src="http://res.cloudinary.com/dlwnmz6lr/image/upload/v1520011253/170203-salt-bae-mn-1530_060e5898cdcf7b58f97126d3cfbfdf71.nbcnews-ux-2880-1000_kllh1d.jpg"/>

I'm hosting the image on Cloudinary and added the crossOrigin attribute so we don't run into any CORS issues.

function drawImage(text) {
  const canvas = document.getElementById('canvas');
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  const img = document.getElementById('salt-bae');
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
}

window.onload = function() {
  drawImage();
}

We're using the canvas drawImage function to draw the image to the canvas. It can be used to draw videos or parts of an image as well. The method provides different ways to do this. We're drawing the image by indicating the position and the width and height of the image.

ctx.drawImage(img, x, y, width, height);
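There's also a longer form that crops a source rectangle out of the image before drawing it, which is handy if you only want part of the picture. The coordinates below are made-up values for illustration:

// Take a 200x200 crop starting at (50, 50) in the source image,
// and scale it into a 300x300 area at (0, 0) on the canvas.
ctx.drawImage(img, 50, 50, 200, 200, 0, 0, 300, 300);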

Alternatively, we could load the image from a URL:

function loadAndDrawImage(src) {
  // Create an image object. (Not part of the DOM)
  const image = new Image();

  // After the image has loaded, draw it to the canvas
  image.onload = () => {
    // draw image
  };

  // Then set the source of the image that we want to load
  image.src = src;
}

Now we load in an image to replace the sprinkles Salt Bae is throwing. First, we load the image using one of the techniques I mentioned earlier, then we draw it to the screen like we did with the Salt Bae base image.

function getRandomInt(min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);
  return Math.floor(Math.random() * (max - min)) + min; // The maximum is exclusive and the minimum is inclusive
}

function drawBackgroundImage(canvas, ctx) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  const img = document.getElementById('salt-bae');
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
}

function getRandomImageSize(min, max, width, height) {
  const ratio = width / height; // Used for aspect ratio
  width = getRandomInt(min, max);
  height = width / ratio;
  return { width, height };
}

function drawSalt(src, canvas, ctx) {
  // Create an image object. (Not part of the DOM)
  const image = new Image();
  image.src = src;

  // After the image has loaded, draw it to the canvas
  image.onload = function() {
    for (let i = 0; i < 8; i++) {
      const randomX = getRandomInt(10, canvas.width / 2);
      const randomY = getRandomInt(canvas.height - 300, canvas.height);
      const dimensions = getRandomImageSize(20, 100, image.width, image.height);
      ctx.drawImage(image, randomX, randomY, dimensions.width, dimensions.height);
    }
  }

  return image;
}

onload = function() {
  const canvas = document.getElementById('canvas');
  const ctx = canvas.getContext('2d');
  drawBackgroundImage(canvas, ctx);
  const saltImage = drawSalt('http://res.cloudinary.com/dlwnmz6lr/image/upload/v1526005050/chadwick-boseman-inspired-workout-program-wide_phczey.webp', canvas, ctx);
};

Now we can let users sprinkle something other than salt.

Uploading an image

We're going to add a button that triggers an image upload and includes an event listener to listen for a change.

<input type="file" class="upload-image">

function updateImage(file, img) {
  img.src = URL.createObjectURL(file);
}

onload = function() {
  const canvas = document.getElementById('canvas');
  const ctx = canvas.getContext('2d');
  drawBackgroundImage(canvas, ctx);
  const saltImage = drawSalt('http://res.cloudinary.com/dlwnmz6lr/image/upload/v1526005050/chadwick-boseman-inspired-workout-program-wide_phczey.webp', canvas, ctx);
  const input = document.querySelector("input[type='file']");

  /*
   * Add event listener to the input to listen for changes to its selected
   * value, i.e when files are selected
   */
  input.addEventListener('change', function() {
    drawBackgroundImage(canvas, ctx); // clear canvas and re-draw
    updateImage(this.files[0], saltImage);
  });
};

URL.createObjectURL() creates a DOMString containing a URL representing the object given in the parameter which, in this case, is the uploaded file.
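Object URLs keep a reference to the file in memory, so if you want to be tidy you can revoke them once the image has loaded. Here's one possible tweak to the updateImage() function above; it isn't required for the demo to work:

function updateImage(file, img) {
  const objectURL = URL.createObjectURL(file);
  // Release the object URL after the new image has loaded;
  // addEventListener keeps the existing onload drawing handler intact.
  img.addEventListener('load', () => URL.revokeObjectURL(objectURL), { once: true });
  img.src = objectURL;
}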

We can even up the game a little bit, like providing some default options. I've added a few emojis you can play around with as a starting point.

Downloading the final image

Once the new meme has been generated, we want users to be able to download and share it. The typical way of doing this is by opening the canvas in a new tab using the toDataURL method, but the user would have to right-click to save the image from that tab, and that's not very convenient.

So, instead, we can take advantage of the download attribute added to links in HTML5. We create a link that, on click, sets the download attribute to the result of canvas.toDataURL. The toDataURL() method "returns a data URI containing a representation of the image in the format specified."

function addLink() {
  var link = document.createElement('a');
  link.innerHTML = 'Download!';
  link.addEventListener('click', function(e) {
    link.href = canvas.toDataURL();
    link.download = "salt-bae.png";
  }, false);
  link.className = "instruction";
  document.querySelectorAll('section')[1].appendChild(link);
}
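For very large canvases, the data URI can get long; canvas.toBlob() is an alternative that hands you a Blob asynchronously, which you can then turn into an object URL. A hedged sketch, where addBlobLink() is a hypothetical helper and not part of the original tutorial:

function addBlobLink(canvas) {
  canvas.toBlob(function(blob) {
    const link = document.createElement('a');
    link.innerHTML = 'Download!';
    link.href = URL.createObjectURL(blob); // much shorter than a base64 data URI
    link.download = 'salt-bae.png';
    document.body.appendChild(link);
  }, 'image/png');
}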

Well that's it! Our meme generator is done.

Some cool links
  • Darius Kazemi has been making a bunch of twitter bots that generate memes.
  • Vox Media has a meme generator called meme that's open source.

Meme away!

The post Creating your own meme generator appeared first on CSS-Tricks.

More Unicode Patterns

Css Tricks - Thu, 06/14/2018 - 3:56am

Creating is the most intense excitement one can come to know.

Anni Albers, On Designing

I recently wrote a post — that was shared here on CSS-Tricks — where I looked at ways to use Unicode characters to create interesting (and random) patterns. Since then, I’ve continued to seek new characters to build new patterns. I even borrowed a book about Unicode from a local library.

(That's a really thick book, by the way.)

It's all up to your imagination to see the possible patterns a Unicode character can make. Although not every character works well as a pattern, the process is a good exercise for me.

And, aside from Unicode itself, the methods to build the patterns may not be so obvious. It usually takes a lot of inspiration and trial and error to come up with new ones.

More tiling

There are actually many ways to do tiling. Here’s one of my favorite tile patterns, which can be easily achieved using CSS grid:

.grid {
  /* using `dense` to fill gaps automatically. */
  grid-auto-flow: dense;
}

.cell {
  /* using `span` to change cell size */
  grid-column-end: span <num>;
  grid-row-end: span <num>;
}

Grid Invaders by Miriam Suzanne is a good example of this technique.

Now, what I'm trying to do is put some Unicode characters into this grid. And most importantly, update the font-size value according to the span of its cell.

Pattern using characters \2f3c through \2f9f

I only tested with Chrome on Mac. Some of the examples may look awful on other browsers/platforms.

.cell {
  /* ... */
  --n: <random-span>;
  grid-column-end: span var(--n);
  grid-row-end: span var(--n);
}

.cell:after {
  /* ... */
  font-size: calc(var(--n) * 2vmin);
}

It's a bit like the Tag Cloud effect, but with CSS. Lots of patterns can be made this way.

Pattern using characters \2686 through \2689

Pattern using characters \21b0, \21b1, \21b2 and \21b4

The span of the columns and rows don't always have to be the same value. We can make small modifications by changing how many rows each cell spans:

.cell {
  /* only change the row span */
  grid-row-end: span <num>;
}

Since the font-size property scales up/down in both directions (vertically and horizontally), we'll use scaleY() in the transform property instead.

Pattern using characters \25c6 through \25c8

:after {
  /* ... */
  transform: scaleY(calc(var(--span) * 1.4));
}

And here's another one, made by rotating the inner container of the grid to some degree.

The triangles also can be drawn with clip-path and will be more responsive, but it's nice to do something in a different way.

More modifications to the layout:

.column-odd { transform: skewY(40deg); }
.column-even { transform: skewY(-40deg); }

Now follow these transformations for each column.

Pattern using characters \1690 through \1694

Composition

Many Unicode pairs share some kind of shape with different angles. For example, parentheses, brackets, and arrows that point in different directions. We can use this concept to combine the shapes and generate repeatable patterns.

This pattern uses less-than and greater-than signs for the base:

A pattern using < and >

:nth-child(odd):after { content: '<'; }
:nth-child(even):after { content: '>'; }

Here we go with parentheses:

A wavy pattern using ( and )

:nth-child(odd):after { content: '('; }
:nth-child(even):after { content: ')'; }

These are characters we use every day. However, they give us a fresh look and feeling when they are arranged in a new way.

There's another pair of characters, \169b and \169c (the Ogham feather marks). Placing them in the grid and scaling them to a proper value connects them together into a seamless pattern:
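The gist of the pairing follows the same odd/even trick as the earlier examples. The actual demos are built with css-doodle, so the selectors here are only a sketch:

:nth-child(odd):after  { content: '\169B'; }
:nth-child(even):after { content: '\169C'; }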

It's like weaving with characters! We can even take it up a notch by rotating things:

Pattern using \169b and \169c

Rings

Last week, I joined a CodePen Challenge that asked participants to make a design out of the sub and sup elements. As I experimented with them, I noticed that the two tags scaled down automatically when nested.
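To picture that nesting, here's a rough sketch of the kind of markup involved; the browser shrinks each nested sub and sup one step further (the content is just placeholder text):

<p>
  x<sup>2<sup>2<sup>2</sup></sup></sup>
  y<sub>1<sub>1<sub>1</sub></sub></sub>
</p>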

So, I tried to put them around a circle:

.first-level {
  /* Slice the circle into many segments. */
  transform: rotate(
    calc(360deg / var(--slice) * var(--n))
  );
}

Suddenly, I realized this method can be used to generate background patterns, too. The results are pretty nice.

Pattern using \003e

sub:after, sup:after {
  content: '\003e';
}

The interesting thing is that changing a single character can end up with very different results.

Combining \002e and \003e together to form a pattern

Combining \25c9 and \2234 creates a different effect in the same circular layout

Wrapping up

That's all for now! The color palettes used in this article are from Color Hunt and Coolors.co.

The examples are generated with css-doodle, except for the Ring examples in the last section. Everything here can be found in this CodePen collection.

Hope you like them and thanks for reading!

The post More Unicode Patterns appeared first on CSS-Tricks.

Truly understand your site visitors’ behavior

Css Tricks - Thu, 06/14/2018 - 2:51am

(This is a sponsored post.)

Hotjar is a quick and easy way to truly understand your visitors and identify opportunities for improvement and growth.

Try the all-in-one analytics and feedback tool for free.

Direct Link to ArticlePermalink

The post Truly understand your site visitors’ behavior appeared first on CSS-Tricks.

UX Case Study: Spotify Vs. Apple Music Mobile Apps

Usability Geek - Wed, 06/13/2018 - 12:47pm
We are back in action! It has been a while since our last UX Case Study, so I will give a brief refresher for our returning readers and an intro for the uninitiated. *Ahem*. In a blogosphere full...
Categories: Web Standards

Understanding the Almighty Reducer

Css Tricks - Wed, 06/13/2018 - 4:34am

I was recently mentoring someone who had trouble with the .reduce() method in JavaScript. Namely, how you get from this:

const nums = [1, 2, 3]
let value = 0

for (let i = 0; i < nums.length; i++) {
  value += nums[i]
}

...to this:

const nums = [1, 2, 3]
const value = nums.reduce((ac, next) => ac + next, 0)

They are functionally equivalent and they both sum up all the numbers in the array, but there is a bit of a paradigm shift between them. Let's explore reducers for a moment because they're powerful and important to have in your programming toolbox. There are literally hundreds of other articles on reducers out there, and I'll link up some of my favorites at the end.

What is a reducer?

The first and most important thing to understand about a reducer is that it will always only return one value. The job of a reducer is to reduce. That one value can be a number, a string, an array or an object, but it will always only be one. Reducers are really great for a lot of things, but they're especially useful for applying a bit of logic to a group of values and ending up with another single result.

That's the other thing to mention: reducers will not, by their nature, mutate your initial value; rather they return something else. Let's walk over that first example so you can see what's happening here. The video below explains:


It might be helpful to watch the video to see how the progression occurs, but here's the code we're looking at:

const nums = [1, 2, 3]
let value = 0

for (let i = 0; i < nums.length; i++) {
  value += nums[i]
}

We have our array ([1, 2, 3]) and the initial value (0) that each number in the array will be added to. We walk through the length of the array and add each number to that value.

Let's try this a little differently:

const nums = [1, 2, 3]
const initialValue = 0

const reducer = function (acc, item) {
  return acc + item
}

const total = nums.reduce(reducer, initialValue)

Now we have the same array, but this time we're not mutating that first value. Instead, we have an initialValue that will only be used at the start. Next, we can make a function that takes an accumulator and an item. The accumulator is the collected value returned in the last invocation that informs the function what the next value will be added to. In the case of addition, you can think of it as a snowball rolling down a mountain that eats up each value in its path as it grows in size by every eaten value.

We’ll use .reduce() to apply the function and start from that initial value. This can be shortened with an arrow function:

const nums = [1, 2, 3]
const initialValue = 0

const reducer = (acc, item) => {
  return acc + item
}

const total = nums.reduce(reducer, initialValue)

And then shortened some more! Implicit returns for the win!

const nums = [1, 2, 3]
const initialValue = 0

const reducer = (acc, item) => acc + item
const total = nums.reduce(reducer, initialValue)

Now we can apply the function right where we called it, and we can also plop that initial value directly in there!

const nums = [1, 2, 3]
const total = nums.reduce((acc, item) => acc + item, 0)

An accumulator can be an intimidating term, so you can think of it like the current state of the array as we're applying the logic on the callback's invocations.

The Call Stack

In case it's not clear what's happening, let's log out what's going on for each iteration. The reduce uses a callback function that runs for each item in the array. The following demo will help make this more clear. I've also used a different array ([1, 3, 6]) because having the numbers be the same as the indices could be confusing.

See the Pen showing acc, item, return by Sarah Drasner (@sdras) on CodePen.
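If you want to poke at it outside of CodePen, here is a sketch of a callback that logs the same information on each invocation (the variable names are mine; the demo's exact code may differ):

const nums = [1, 3, 6]

const total = nums.reduce((acc, item) => {
  const returnValue = acc + item
  // log the accumulator, the current item, and what we hand back
  console.log(`Acc: ${acc}, Item: ${item}, Return value: ${returnValue}`)
  return returnValue
}, 0)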

When we run this, we'll see this output in the console:

"Acc: 0, Item: 1, Return value: 1" "Acc: 1, Item: 3, Return value: 4" "Acc: 4, Item: 6, Return value: 10"

Here's a more visual breakdown:


  1. It shows that the accumulator is starting at our initial value, 0
  2. Then we have the first item, which is 1, so our return value is 1 (0 + 1 = 1)
  3. 1 becomes the accumulator on the next invocation
  4. Now we have 1 as the accumulator and 3 is the item since it is next in the array.
  5. The returned value becomes 4 (1 + 3 = 4)
  6. That, in turn, becomes the accumulator and the next item at invocation is 6
  7. That results in 10 (4 + 6 = 10) and is our final value since 6 is the last number in the array
Simple Examples

Now that we've got that under our belt, let's look at some common and useful things reducers can do.

How many of X do we have?

Let's say you have an array of numbers and you want to return an object that reports the number of times those numbers occur in the array. Note that this could just as easily apply to strings.

const nums = [3, 5, 6, 82, 1, 4, 3, 5, 82]

const result = nums.reduce((tally, amt) => {
  tally[amt] ? tally[amt]++ : tally[amt] = 1
  return tally
}, {})

console.log(result)

See the Pen simplified reduce by Sarah Drasner (@sdras) on CodePen.

Wait, what did we just do?

Initially, we have an array and the object we’re going to put its contents into. In our reducer, we ask: does this item exist? If so, let's increment it. If not, add it and set it to 1. At the end, please return the tally count of each item. Then, we run the reduce function, passing in both the reducer and the initial value.
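Since the same approach works for strings, here's a quick variation (with made-up data) that tallies words instead of numbers:

const fruits = ['apple', 'pear', 'apple', 'plum', 'apple']

const tally = fruits.reduce((acc, fruit) => {
  acc[fruit] ? acc[fruit]++ : acc[fruit] = 1
  return acc
}, {})

console.log(tally) // { apple: 3, pear: 1, plum: 1 }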

Take an array and turn it into an object that shows some conditions

Let’s say we have an array and we want to create an object based on a set of conditions. Reduce can be great for this! Here, we want to create an object out of any instance of a number contained in the array and show both an odd and even version of this number. If the number is already even or odd, then that’s what we’ll have in the object.

const nums = [3, 5, 6, 82, 1, 4, 3, 5, 82]

// we're going to make an object from an even and odd
// version of each instance of a number
const result = nums.reduce((acc, item) => {
  acc[item] = {
    odd: item % 2 ? item : item - 1,
    even: item % 2 ? item + 1 : item
  }
  return acc
}, {})

console.log(result)

See the Pen simplified reduce by Sarah Drasner (@sdras) on CodePen.

This will shoot out the following output in the console:

1: {odd: 1, even: 2}
3: {odd: 3, even: 4}
4: {odd: 3, even: 4}
5: {odd: 5, even: 6}
6: {odd: 5, even: 6}
82: {odd: 81, even: 82}

OK, so what's happening?

As we’re going through every item in the array, we create a property for even and odd, and based on an inline condition with a modulus operator, we’ll either store the number as-is or shift it by 1 to get its counterpart. The modulus operator is really good for this because it can quickly check for even or odd — if it's divisible by two, it's even, if not, it's odd.
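Pulled out on its own, the check is tiny (an illustrative sketch, not code from the demo):

const isEven = n => n % 2 === 0

console.log(isEven(82)) // true, 82 % 2 is 0
console.log(isEven(3))  // false, 3 % 2 is 1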

Other resources

At the top, I mentioned other posts out there that are handy resources to get more familiar with the role of reducers. Here are a few of my favorites:

  • The MDN documentation is wonderful for this. Seriously, it's one of their best posts, IMO. They also describe in a bit more detail what happens if you don't provide an initial value, which we didn't cover in this post.
  • Daniel Shiffman is always amazing at explaining things on Coding Train.
  • A Drip of JavaScript does a good job, too.

The post Understanding the Almighty Reducer appeared first on CSS-Tricks.

Your Brain on Front-End Development

Css Tricks - Wed, 06/13/2018 - 3:59am

Part of the job of being a front-end developer is applying different techniques and technologies to pull off the desired UI and UX. Perhaps you work with a design team and implement their designs. I know when I look at a design (heck, even if I know I'm not going to be building it), my front-end brain starts triggering all sorts of things I know will be related to the task.

Let's take a look at what I mean.

Check out this lovely Dribbble shot for a Food Recipe Website from Riko Sapto Dimo.

It's a very appealing design, and there is loads in there to think about from a front-end web design and development standpoint.

We're going to mostly be talking about design pattern choices and HTML/CSS tech choices. There is much more to the job of front-end development. Accessibility! Performance! Semantics! Design systems! All important stuff as well.

Multi-line padded text

Ah yes, that look where text has a background that follows the length of the lines of text. We've called that Multi-Line Padded Text in the past and looked at a number of ways to do it. The easiest and most modern way to handle it is with box-decoration-break.

See the Pen Multiline Padding with box-decoration-break by Chris Coyier (@chriscoyier) on CodePen.

Flexbox header

That header area is just begging for flexbox. It's a single-direction layout with elements of different sizes and different space between them. Expressing that in flexbox is going to be easier than any other method and not require any fixed sizing or magic numbers — not to mention flexible!
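A minimal sketch of how that header could start out (the class name is made up):

.site-header {
  display: flex;
  align-items: center;            /* vertically center logo, nav, and search */
  justify-content: space-between; /* push the groups apart without magic numbers */
}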

Grid layout

The overall page layout here could be expressed nicely with CSS grid. Remember that flexbox and grid are not at odds. An element placed in a grid cell can be flexbox! Like the header above, that makes perfect sense. The main content area and footer, as grid cells, could probably go either way.

Vertical writing

Not the most obvious thing to pull off! Your best bet is using writing modes. Jen Simmons has written about this, and here's a demo:

See the Pen Writing Mode Demo — Headline by Jen Simmons (@jensimmons) on CodePen.

Line clamping

Looks like we have some truncation going on here. Performance-wise, we'd probably want the data being sent to only be a few lines long. But the front end can help with this too, if it has to. Three lines of text are shown here with an ellipsis at the end. Perhaps the design really needs the copy to always be a maximum of three lines. That's called line clamping.
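One common way to get that effect today is the WebKit line-clamp combo, which most browsers support with the prefix. A sketch with a hypothetical class name (the demo below shows a few approaches, not necessarily this one):

.excerpt {
  display: -webkit-box;
  -webkit-box-orient: vertical;
  -webkit-line-clamp: 3; /* show three lines, then an ellipsis */
  overflow: hidden;
}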

See the Pen Line Clampin' by Chris Coyier (@chriscoyier) on CodePen.

Custom fonts

Like most sites these days, this design is coated in custom web fonts. With a design this striking, I'd be very careful about my font loading technique. My gut tells me I'd be more into FOIT than FOUT here, and ideally I'd cache that font file as hard as I could so that we'd have neither as often as possible.
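If you're self-hosting the fonts, font-display is one lever for choosing between those behaviors. A rough sketch, with a made-up font name and file:

@font-face {
  font-family: "Fancy Display";
  src: url("fancy-display.woff2") format("woff2");
  /* "block" leans toward FOIT (hide text briefly);
     "swap" leans toward FOUT (show the fallback immediately) */
  font-display: block;
}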

Text over images

That text "Dinner Menu" is squarely over some busy photographic imagery below. It's still readable though, largely because of the bright white of the text over a darkened image. We've covered thinking this through in the past in detail. White text over a darkened image is generally the way to go, and darkened enough such that just about any image will be OK. There are other options though, like gradients and blurring (which is also in use here in the footer).

See the Pen ByKwaq by Chris Coyier (@chriscoyier) on CodePen.

SVG icons / Star ratings

There are a number of simple, vector icons scattered around the design. Those are a sure-bet for an SVG icon system. This is my current recommendation for approaching an SVG icon system. Inline the SVG. Simple and powerful.

Those star ratings are probably SVG territory as well. Here's a good collection of options. Progressively enhancing from radio buttons always seems like a smart way to go:

See the Pen CSS: Radio Input Stars by Jake Albaugh (@jakealbaugh) on CodePen.

Hamburgers

It might seem a little superfluous on a large screen design like this, especially as there is navigation already visible. But hey, it's hard to avoid these days, and there is something to be said for training users where site navigation lives, regardless of how they're viewing the site.

Here's a collection of those type of menus.

See the Pen Hamburger menu flip with text change by Eric Grucza (@egrucza) on CodePen.

Anything else in the design I didn't mention that your mind goes to right away?

The post Your Brain on Front-End Development appeared first on CSS-Tricks.

Hidden Treasures: Fonts nearly lost to history re-emerge

Nice Web Type - Tue, 06/12/2018 - 2:09pm

We’re delighted to introduce Hidden Treasures of the Bauhaus Dessau, a font collection inspired by original type specimens the Bauhaus artists left undeveloped.

Images from Hidden Treasures campaign.

Fittingly, these fonts are now a reality thanks to type design students a few generations beyond the Bauhaus era. Project co-coordinators Erik Spiekermann and Ferdinand Ulrich worked with instructors at five different design schools to nominate students to participate in the project. Under the supervision of Spiekermann and Ulrich, the students designed fresh new fonts based on the original source materials.

We’re making the fonts freely available, and will add them to our library as they are released. Keep an eye on this space over the summer as new fonts appear!

Joschmi
When designing Joschmi, Flavia Zimbardi referenced the design of a lesser-known stencil alphabet by Joost Schmidt, who instructed many other type designers of the Bauhaus Dessau. She had only six of the original letterforms to work with: a, b, c, d, e, and g. Based in New York, Zimbardi hails from Rio de Janeiro and is a graduate of the Type@Cooper Extended Program.

Xants
Designed by Luca Pellegrini, Xants is based on an alphabet by Swiss-Italian designer Xanti Schawinsky that combines stencil characteristics with a neo-classical stroke contrast for a unique mix of lettering influences in one place. Pellegrini is a second-year student in the MA program in type design at the University of Art and Design/ECAL in Lausanne, Switzerland.

Visit the Hidden Treasures campaign page for more history about the project, design challenges, and more.

What was the Bauhaus?

Bauhaus design school. Image from the Hidden Treasures campaign.

The Bauhaus was a German design school that became celebrated for its novel holistic approach to design. Artists like Paul Klee and Wassily Kandinsky spent time at the school as instructors, and the broadly-themed curriculum included courses on lettering, which typographer Joost Schmidt taught for seven years.

Schmidt, his students, and a handful of other teachers at the school worked on numerous lettering projects over the years to accompany the designs coming from all corners of the Bauhaus. Many of the letters incorporated geometric features in the “modernist” tradition. A similar geometric, unadorned character imbued much of the design work emerging from the Bauhaus, often earning strong reactions from the public when unveiled.

The school was in operation from 1919 to 1933, when it closed down due to pressure from the Nazi regime. It is with the cooperation of the Bauhaus archives that the original source material from the school became available for renewed typographic study. Today we’re delighted that the Hidden Treasures collaboration has resulted in a new life for the letter designs that were left as one-off or incomplete projects so many years ago.

A Quick Roundup of Recent React Chatter

Css Tricks - Tue, 06/12/2018 - 7:37am

Like many, many others, I'm in the pool of leveling up my JavaScript skills and learning how to put React to use. That's why Brad Frost resonated with me when he posted "My Struggle to Learn React."

As Brad does, he clearly outlines his struggles point-by-point:

  • I have invested enough time learning it
  • React and ES6 travel together
  • Syntax and conventions
  • Getting lost in this-land
  • I haven’t found sample projects or tutorials that match how I tend to work
  • I'm less competent at JS than HTML and CSS

It seems that Brad's struggles resonated with others as well, inspiring empathy and help from the community. For example, Kevin Ball touches on the second and third frustrations by supplying a distinction between React and ES6 and examples of the syntax and conventions of each:

For each feature, I show a couple examples of what it might look like, identify where it is coming from, give you a quick overview of what is called and what it does, and link off to some resources that can help you learn about it.

Super awesome!

Shortly following Brad's post was this tweet from Sara Soueidan:

I’m just gonna throw this bomb here:

React is the new jQuery

There you go.

— Sara Soueidan (@SaraSoueidan) May 24, 2018

You know that lit up the Twitterverse. Yes, it's provocative, but the sentiment is pretty clean cut as she clarified a little later:

I used to LOVE jQuery, but hated how it was overused even when it was completely unneeded and unnecessary and, dare I say, sometimes harmful.

I hope this clarifies my controversial tweet from this afternoon. ;)

— Sara Soueidan (@SaraSoueidan) May 24, 2018

Speaking of jQuery, Sarah Drasner had written a post a little while ago that showed how Vue can be used as a jQuery replacement and requires no build process at all. Well, the same can be true of React, despite the fact that both frameworks are predominantly used in complex app environments.

And, if all this talk about moving away from jQuery and into complex app environments sounds scary, then maybe this interview with Bruce Lawson will be reassuring to you. After all:

The end user doesn't care whether your website is made with React or Angular or webpack or Broccoli or Grunt or whatever. They just want it to work in their damn browser.

But, still, there may be circumstances where React will be the right tool for the job and you'll want it in your toolbox. For example, WordPress is using it as the basis for its upcoming Gutenberg editor, meaning WordPress developers (and that's a lot of us) will want to heed Matt Mullenweg's advice to "learn JavaScript deeply." Our guide on developing for Gutenberg might be a great place for you to start that journey.

All in a day's work, right?!

The post A Quick Roundup of Recent React Chatter appeared first on CSS-Tricks.

Linkbait 41

QuirksBlog - Tue, 06/12/2018 - 5:28am

Friends edition. Lots of articles by people I’ve known for ages. Not sure why; probably just a coincidence.

  • The Big Z deplores the cult of the complex.

    in a field where young straight white dudes take an overwhelming majority of the jobs (including most of the management jobs) it’s perhaps to be expected that web making has lately become something of a dick measuring competition.

    Before you diss him (and me) as an old fart who isn’t keeping up with the times, please consider the following question: At which time can we start to safely say that people who just cram frameworks into everything they make are too set in their ways and can’t keep up with the latest trends? Two years? Three? Five?
  • Brad takes a middle position between those who applaud the shiny new and those who deplore it, by asking (rather testily? or is that just my imagination?) why both sides treat a simple “I don’t understand X” as fodder for their view of web development. (I am guilty as charged, I’m afraid.)
  • Jeremy hopes AMP will drive itself to extinction.

    If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting their “regular” pages put on weight.

    and

    I wish that AMP were being marketed more like a temporary polyfill. And as with any polyfill, I look forward to the day when AMP is no longer necessary.

  • Rachel wrote a massive guide to CSS layout. I’ll have to study it closely if I ever write the CSS book for JavaScripters. I did not know about display: flow-root.
  • Ethan is a little excited about Safari (or, at least, WebKit) coming to the Apple Watch. So am I. It’ll be interesting to see how they solved the low-memory and small-screen issues. Ethan’s article contains a lot of useful links.
  • I’m not excited about yet another meta tag, though — see Erik Runyon’s article for the details. I wish we could have left it at the existing one, but of course web designers didn’t make their old sites fit for 272px, which appears to be the ideal layout viewport width of the smallest watch.
  • Tim adds some performance notes:

    The median site sends about 351kb of compressed JavaScript to “mobile” devices according to HTTP Archive. That’s roughly 1.7-2.4MB of uncompressed JavaScript the browser has to parse, compile, and execute. That little S3 processor is going to struggle if we try to serve anything close to the amount of JavaScript that we serve to everything else.

    Use AMP? (Just kidding)
    We can hope that this will drastically drop average JS usage, but it probably won’t.
  • The inimitable Lin Clark wrote cartoon introductions to DNS over HTTPS and ES modules.
  • A very useful overview of current VR sets, including their browsers and WebVR support.
  • Speaking of which, Tesla updated its browser. It’s not a cutting-edge one, judging by the HTML5 Tests screenshots, but I can see why disabling video in a car browser might be a good idea.
  • Have a tip for the next Linkbait? Or a comment on this one? Let me know (or here or here).

Creating a Bar Graph with CSS Grid

Css Tricks - Tue, 06/12/2018 - 4:21am

If you’re looking for more manageable ways to create bar graphs, or in search of use cases to practice CSS Grid layout, I got you!

Before we begin working on the graph, I want to talk about coding the bars, when Grid is a good approach for graphs, and we’ll also cover some code choices you might consider before getting started.

Preface

The bar is a pretty basic shape: you can control its dimensions with CSS width, height, number of grid or table cells, etc. depending on how you’ve coded it. As far as graphs go, the main thing we want to control is the height of the bars in the graph.

Controlling height with Grid cells (like here) is convenient for designs where the height is incremental by a fixed value — no in-betweens. For example, signal bars in phones or when you don’t mind setting a lot of grid rows to better control the bar height down to its smallest value, like IRL graph paper.

For my graph, I want gradient bars as well as vertical and horizontal axes labels. So, to make it easy, I have decided to control the bar height with gradient sizing, and determine the number of grid rows based on the number of vertical axis labels I want.

Also, other than the contents for the graph — bars, axes labels, and captions — there’ll be no data present in the HTML, like data about bar colors and dimensions.

data-* attributes are used to provide that sort of information in HTML. But I didn’t want to switch back and forth between HTML and CSS while coding, and decided to completely separate the content from the design. It’s totally up to you. If you feel like using data-* might benefit your project, go for it.
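For reference, the data-* route I decided against could look roughly like this, with hypothetical attribute values mirroring the --h custom property used later:

<div class="graphBar" data-height="10%"></div>
<div class="graphBar" data-height="65%"></div>

<script>
  // Map each bar's data attribute onto the --h custom property the CSS reads
  document.querySelectorAll('.graphBar').forEach(bar => {
    bar.style.setProperty('--h', bar.dataset.height);
  });
</script>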

I’ve created a diagram below that you might find useful to refer to while reading the code. It depicts the graph and the grid that contains it. The numbers represent grid lines.

Let’s code this thing.

The HTML

Grid can automatically place items in top-bottom and left-right directions. To take advantage of that, I’m going to add the graph contents in the order y-axis labels (top-bottom), bars, and x-axis labels (left-right). This way, I only need to write the HTML markup and the CSS will place the bars for me!

<figure aria-hidden="true">
  <div class="graph">
    <span class="graphRowLabel">100</span>
    <span class="graphRowLabel">90</span>
    <span class="graphRowLabel">80</span>
    <span class="graphRowLabel">70</span>
    <span class="graphRowLabel">60</span>
    <span class="graphRowLabel">50</span>
    <span class="graphRowLabel">40</span>
    <span class="graphRowLabel">30</span>
    <span class="graphRowLabel">20</span>
    <span class="graphRowLabel">10</span>
    <div class="graphBar"></div>
    <div class="graphBar"></div>
    <div class="graphBar"></div>
    <div class="graphBar"></div>
    <div class="graphBar"></div>
    <span><sup>Y </sup>&frasl;<sub> X</sub></span>
    <span class="graphColumnLabel">&#x1f60a;</span>
    <span class="graphColumnLabel">&#x1f604;</span>
    <span class="graphColumnLabel">&#x263a;&#xfe0f;</span>
    <span class="graphColumnLabel">&#x1f601;</span>
    <span class="graphColumnLabel">&#x1f600;</span>
  </div>
  <figcaption>Made with CSS Grid &#x1f49b;</figcaption>
</figure>
<span class="screenreader-text">Smiling face with squinting eyes: 10%, grinning face with squinting eyes: 65%, smiling face: 52%, grinning face with smiling eyes: 100%, and grinning face: 92%.</span>

Note: If you’re interested in accessibility, know that I’m not an accessibility expert. But when I tried to make the bars accessible, screen reader experience simply sucked. Using aria-labelledby wasn’t that good either. So, I added a text description of the graph and hid it from the visual display. That made the reading much more natural.

The CSS

This is where the magic happens.

/* The grid container */
.graph {
  display: grid;
  grid: repeat(10, auto) max-content / max-content repeat(5, auto);
  /* ... */
}

We’ve defined eleven rows and six columns in our grid with these two little lines of CSS: ten automatically sized rows and one sized to its "maximum content"; one column sized to its "maximum content" and five automatically sized. CSS Grid is a beautiful thing.
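If the shorthand reads as cryptic, the row and column definitions in it (rows before the slash, columns after) are equivalent to this longhand:

.graph {
  display: grid;
  grid-template-rows: repeat(10, auto) max-content;
  grid-template-columns: max-content repeat(5, auto);
}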

The graph bars need to cover the grid from the first row to the second-to-last row since we are using the last one for the x-axis labels. I gave the bars 100% height, and grid-row: 1 / -2; which means "span the items from first horizontal grid line till the second last."

/* A grid item */
.graphBar {
  height: 100%;
  grid-row: 1 / -2;
}

The bars also have a linear gradient going upwards. The size of the colored portion of the gradient is the indicator of the bar’s height, which in turn is taken from each bar’s own CSS rule as a custom property.

/* A grid item */
.graphBar {
  /* Same as before */
  background: palegoldenrod linear-gradient(to top, gold var(--h), transparent var(--h));
}

To control the width of the bars and the space between them, I use a fixed width and centered them with justify-self: center;. You can instead use grid-column-gap to create gaps between columns if you want. Here’s the full code that pulls everything for the bars together:

/* A grid item */
.graphBar {
  height: 100%;
  grid-row: 1 / -2;
  background: palegoldenrod linear-gradient(to top, gold var(--h), transparent var(--h));
  width: 45px;
  justify-self: center;
}

Did you notice the CSS variable (var(--h)) in there? We need to specify the exact height of each bar and we can use the variable to determine the height of the background gradient in terms of percentage:

.graphBar:nth-of-type(1) { grid-column: 2; --h: 10%; }
.graphBar:nth-of-type(2) { grid-column: 3; --h: 65%; }
.graphBar:nth-of-type(3) { grid-column: 4; --h: 52%; }
/* and so on... */

That’s it! After styling, the graph looks like this:

See the Pen CSS Bar Graph with Grid by Preethi (@rpsthecoder) on CodePen.

There are a few demo-specific styles in here but everything we’ve covered so far will get you the basic framework for a bar graph. The y-axis labels I created are positioned on top of the grid lines for a slightly cleaner layout. I got the cylindrical shape and the cross-section edges of the bars by using border-radius and elliptic pseudo elements, respectively. Without them, you’ll get a straight up rectangular bar.

The post Creating a Bar Graph with CSS Grid appeared first on CSS-Tricks.

Build live comments with sentiment analysis using Nest.js

Css Tricks - Tue, 06/12/2018 - 4:18am

(This is a sponsored post.)

Interestingly, one of the most important areas of a blog post is the comment section. This plays an important role in the success of a post or an article, as it allows proper interaction and participation from readers. This makes it inevitable for every platform with a direct comments system to handle it in realtime.

In this post, we’ll build an application with a live comment feature. This will happen in realtime as we will tap into the infrastructure made available by Pusher Channels. We will also use the sentiment analysis to measure whether comments are positive or negative, and display this information on an admin panel.

Direct Link to ArticlePermalink

The post Build live comments with sentiment analysis using Nest.js appeared first on CSS-Tricks.

Versioning Interview

Css Tricks - Mon, 06/11/2018 - 7:45am

Adam Roberts (who you might recognize from our interview with him), interviewed me for the Versioning newsletter. I'm publishing my answers here for y'alls perusal as well!

Which dev/tech idea or trend excites you the most at the moment, and why?

I love that new JavaScript has arrived. I don’t know if "new JavaScript" is really the word for it, but that’s what it feels like. Major syntax improvements coupled with state and component-based thinking, coupled with powerful frameworks tying it all together: React, Angular, Vue, Ember, etc. Plus the ecosystem they live in, which often includes ES6+ processing, building/bundling, state management tools, and more.

Particularly impressive are tools like Create React App that get you cooking on a whole fancy setup like that in seconds. Vue CLI is similarly amazing.

Combined with serverless / JAMstack stuff, I love it all the more.

This stuff is so thick in the air right now it’s definitely not going away, like it or not. It will evolve, but this whole thing that I call New JavaScript (for lack of a better overall term) is gonna be around for a hot while.

Which dev/tech idea or trend is overrated, and why?

I’d hate to crap on any particular idea. For one, I’m not sure it does anybody any good. But also, I’m so often wrong about stuff like this. It reminds me of my track record guessing if a startup is a good idea or not. It seems like if I find myself rolling my eyes at a startup it will get huge, and if I think it’s absolutely amazing it’ll be dead in a year.

Describe (or link to!) something cool you built, designed or produced recently. Why is it cool, why are you proud of it?

Just recently I built a little microsite about Serverless technology. I’m far from an expert, but that’s part of why it’s cool in a way. It’s what I’ve been doing my whole career. I’m a beginner about this Serverless stuff, and that’s sometimes when it’s the best time to write about something, because your empathy is at a maximum for other people that don’t get it.

The point of the site is explaining why serverless is useful even for front-end developers, all the zillion services that are a part of the serverless world, plus ideas and resources.

How did you build it?

Using CodePen! It’s a static site (Of course! Serverless!), but I still wanted to work in a componentized way. That ended up being perfect for Nunjucks, which is a technology we support on CodePen Projects. I wrote about the whole process on Smashing Magazine.

How did you find yourself interested in this stuff?

One thing that happened is that we jumped on some serverless technology at CodePen. We have some pretty perfect use cases for serverless functions, like using them for preprocessing. For us, they are fast, easy to set up, easy to maintain, and less expensive than spinning up our own servers.

I’m no expert though. I just think it’s fascinating to watch as someone who likes to watch what’s happening in our industry. It’s part of my job too, really, as I write and talk about the web pretty regularly. One reason I’m excited about them is because a front-end developer with some JavaScript skills can take advantage of them to do things they might not even realize they can do. That’s kind of a big deal!

Joining the dots between the component-based thinking and serverless tech, has this changed how you think people should develop a project?

I’m always hesitant to tell people how to build their stuff. Although I probably don’t do as good a job at that as I want to. I think a better tone is to explain what works for you and why and let people digest that how they will.

Some trends are impossible to ignore though. Component-based thinking is likely here to stay. It seems like one of those ideas that is just a result of our industry maturing into patterns that help everyone. Plus it’s abstract enough that the concept lives on beyond any particular implementation.

And has it changed the paths you’d suggest for someone learning web dev?

If you want one simple message, it’s JavaScript. I think you’ll turn out pretty OK if you dig way into that right now.

What’s the best tech-related thing you watched recently, and why?

I’m far from unique here, but I quite love documentaries. I feel like documentaries have a strong chance of influencing how you think. If you bring up a documentary at a dinner party, and others have seen it, it always lights up the conversation. I really enjoyed Wild Wild Country recently. I liked Fishpeople as an easy, quick, inspirational documentary snack. I’ve seen Home Movie about a million times, as that one just really resonates with me (it’s about people who live in really weird houses). If you wanna dig into a truly strange and interesting world created by one person, try Marwencol.

And finally, what was the funniest or interesting off-topic link you’ve sent to a friend recently?

Did y’all see this one-minute animated ad about the changing job market for University of Phoenix? Incredibly well done, like a dang Pixar short. It’s powerful, but of course it stirs up mixed feelings. For one, University of Phoenix doesn’t have a stellar reputation, so I’m slightly dubious that, if you wanna up and change your life, that is the best way to do it.

More interestingly though, while factory jobs are being lost at a faster clip, our own industry is also worried about automation. It would be bitter irony indeed to leave a factory job, get an education, change careers, only to land in a new job that is also shortly lost to automation.

The post Versioning Interview appeared first on CSS-Tricks.

Digging Into React Context

Css Tricks - Mon, 06/11/2018 - 3:24am

You may have wondered lately what all the buzz is about Context and what it might mean for you and your React sites. Before Context, when the management of state gets complicated beyond the functionality of setState, you likely had to make use of a third party library. Thanks to recent updates by the awesome React team, we now have Context which might help with some state management issues.

What Does Context Solve?

How do you move the state from a parent component to a child component that is nested in the component tree? You know that you can use Redux to manage state, but you shouldn’t have to jump to Redux in every situation.

There's a way to do this without Redux or any other third party state management tool. You can use props!

Say the feature you want to implement has a tree structure similar to what I have below:

The state lives in the App component and is needed in UserProfile and UserDetails components. You need to pass it via props down the tree. If the components that need this state are 10 steps deep, this can become tedious, tiring, and error prone. Each component is supposed to be like a black box — other components should not be aware of states that they do not need. Here is an example of an application that matches the scenario above.

class App extends React.Component {
  state = {
    user: {
      username: 'jioke',
      firstName: 'Kingsley',
      lastName: 'Silas'
    }
  }

  render() {
    return(
      <div>
        <User user={this.state.user} />
      </div>
    )
  }
}

const User = (props) => (
  <div>
    <UserProfile {...props.user} />
  </div>
)

const UserProfile = (props) => (
  <div>
    <h2>Profile Page of {props.username}</h2>
    <UserDetails {...props} />
  </div>
)

const UserDetails = (props) => (
  <div>
    <p>Username: {props.username}</p>
    <p>First Name: {props.firstName}</p>
    <p>Last Name: {props.lastName}</p>
  </div>
)

ReactDOM.render(<App />, document.getElementById("root"));

We are passing the state from one component to another using props. The User component has no need of the state, but it has to receive it via props in order for it to get down the tree. This is exactly what we want to avoid.

See the Pen React Context API Pen 1 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Context to the Rescue!

React’s Context API allows you to store the state in what looks like an application global state and access it only in the components that need them, without having to drill it down via props.

We start by initializing a new Context using React's createContext()

const UserContext = React.createContext({})
const UserProvider = UserContext.Provider
const UserConsumer = UserContext.Consumer

This new Context is assigned to a const variable, in this case, the variable is UserContext. You can see that there is no need to install a library now that createContext() is available in React (16.3.0 and above).

The Provider component makes the context available to components that need it, which are called Subscribers. In other words, the Provider component allows Consumers to subscribe to changes in context. Remember that the context is similar to a global application state. Thus, components that are not Consumers will not be subscribed to the context.

If you are coding locally, your context file will look like this:

import { createContext } from 'react'

const UserContext = createContext({})

export const UserProvider = UserContext.Provider
export const UserConsumer = UserContext.Consumer

The Provider

We'll make use of the Provider in our parent component, where we have our state.

class App extends React.Component {
  state = {
    user: {
      username: 'jioke',
      firstName: 'Kingsley',
      lastName: 'Silas'
    }
  }

  render() {
    return(
      <div>
        <UserProvider value={this.state.user}>
          <User />
        </UserProvider>
      </div>
    )
  }
}

The Provider accepts a value prop to be passed to its Consumer component descendants. In this case, we will be passing the user state to the Consumer components. You can see that we are not passing the state to the User component as props. That means we can edit the User component and exclude the props since it does not need them:

const User = () => (
  <div>
    <UserProfile />
  </div>
)

The Consumer

Multiple components can subscribe to one Provider component. Our UserProfile component needs to make use of the context, so it will subscribe to it.

const UserProfile = (props) => (
  <UserConsumer>
    {context => {
      return(
        <div>
          <h2>Profile Page of {context.username}</h2>
          <UserDetails />
        </div>
      )
    }}
  </UserConsumer>
)

The data we injected into the Provider via the value prop is then made available in the context parameter of the function. We can now use this to access the username of the user in our component.

The UserDetails component will look similar to the UserProfile component since it is a subscriber to the same Provider:

const UserDetails = () => (
  <div>
    <UserConsumer>
      {context => {
        return (
          <div>
            <p>Username: {context.username}</p>
            <p>First Name: {context.firstName}</p>
            <p>Last Name: {context.lastName}</p>
          </div>
        )
      }}
    </UserConsumer>
  </div>
)

See the Pen React Context API Pen 2 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Updating State

What if we want to allow users to change their first and last name? That's also possible. Consumer components can re-render whenever there are changes to the value passed by the Provider component. Let's see an example.

We'll have two input fields for the first and last name in the consumer component. From the Provider component, we will have two methods that update the state of the application using the values entered in the input fields. Enough talk, let's code!

Our App component will look like this:

class App extends React.Component {
  state = {
    user: {
      username: 'jioke',
      firstName: 'Kingsley',
      lastName: 'Silas'
    }
  }

  render() {
    return(
      <div>
        <UserProvider value={
          {
            state: this.state.user,
            actions: {
              handleFirstNameChange: event => {
                const value = event.target.value
                this.setState(prevState => ({
                  user: {
                    ...prevState.user,
                    firstName: value
                  }
                }))
              },
              handleLastNameChange: event => {
                const value = event.target.value
                this.setState(prevState => ({
                  user: {
                    ...prevState.user,
                    lastName: value
                  }
                }))
              }
            }
          }
        }>
          <User />
        </UserProvider>
      </div>
    )
  }
}

We are passing an object which contains state and actions to the value prop which the Provider receives. The actions are methods that will be triggered when an onChange event happens. The value of the event is then used to update the state. Since we want to update either the first name or last name, there’s a need to preserve the value of the other. For this, we make use of the ES6 spread operator, which allows us to update the value of the specified key.
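Pulled out of the component, the update pattern is just this (a tiny sketch with made-up values):

const prevUser = { username: 'jioke', firstName: 'Kingsley', lastName: 'Silas' }

// copy every existing key, then overwrite only firstName
const nextUser = { ...prevUser, firstName: 'Nnamdi' }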

With the new changes, we need to update the UserProfile component.

const UserProfile = (props) => (
  <UserConsumer>
    {({state}) => {
      return(
        <div>
          <h2>Profile Page of {state.username}</h2>
          <UserDetails />
        </div>
      )
    }}
  </UserConsumer>
)

We use ES6 destructuring to extract state from the value received from the Provider.

For the UserDetails component, we need both the state and the actions. We also add two input fields that listen for an onChange() event and call the corresponding methods.

const UserDetails = () => {
  return (
    <div>
      <UserConsumer>
        {({ state, actions }) => {
          return (
            <div>
              <div>
                <p>Username: {state.username}</p>
                <p>First Name: {state.firstName}</p>
                <p>Last Name: {state.lastName}</p>
              </div>
              <div>
                <div>
                  <input
                    type="text"
                    value={state.firstName}
                    onChange={actions.handleFirstNameChange}
                  />
                </div>
                <div>
                  <input
                    type="text"
                    value={state.lastName}
                    onChange={actions.handleLastNameChange}
                  />
                </div>
              </div>
            </div>
          )
        }}
      </UserConsumer>
    </div>
  )
}

Using Default Values

It is possible to pass default values while initializing Context. To do this, instead of passing an empty object to createContext(), we will pass some data.

const UserContext = React.createContext({
  username: 'johndoe',
  firstName: 'John',
  lastName: 'Doe'
})

To make use of this data in our application tree, we have to remove the provider from the tree. So our App component will look like this.

class App extends React.Component {
  state = {
    user: {
      username: 'jioke',
      firstName: 'Kingsley',
      lastName: 'Silas'
    }
  }

  render() {
    return(
      <div>
        <User />
      </div>
    )
  }
}

See the Pen React Context API Pen 4 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

The data used in the Consumer components will now be the default data we defined when initializing the new Context.

In Conclusion

When things get complicated, and you are tempted to run yarn install [<insert third-party library for state management], pause for a second — you’ve got React Context at the ready. Don't you believe me? Maybe you'll believe Kent C. Dodds.

The post Digging Into React Context appeared first on CSS-Tricks.

Creating a VS Code Theme

Css Tricks - Fri, 06/08/2018 - 3:13am

Everyone has special and perhaps, particular, tastes when it comes to their code editor. There are literally thousands of themes out there, and for good reason: a thing of beauty and enhancement to productivity for one can be a hindrance to another.

It’s been an item on my bucket list to create my own theme. I was coding very late one night, well into the small hours of the morning. Everyone in my house was sleeping and so, as usual, the only light was the glow of my screen. I know it’s not necessarily healthy to code like this, but it’s literally the time I’m most productive: there are minimal distractions, I’m not dealing with work stuff, family stuff, friend stuff, or puppy stuff. I can focus.

I had some preferences set for the theme I had been using and, though they all worked well for daytime or plane rides, I always felt like something was missing for late night coding sessions. I decided it was time to craft my own theme.

We'll talk first about the general process for creating a theme in case you'd like to create one of your own, and then we'll dive into some of the research and testing that went into mine in particular to peek into the process.

Fire It Up

Before you do anything, you’re going to install vsce (short for Visual Studio Code Extensions) and establish yourself as a publisher. All of the instructions to do so are here. I know it looks like a lot, but it takes anywhere from 5-10 minutes, and then you'll never have to do it again, for any extension you create.

Now that you’ve got that under your belt, here are the steps you need to start work.

First, you need to run:

npm install -g yo generator-code

This makes the generator globally available on your machine (meaning you can now create a theme in any directory). You can then execute this command to kick off your theme:

yo code

You will be prompted by a screen that looks like this:

Note that I’ve used the arrows here to navigate to the “New Color Theme” option. Note also that this is how you’d want to make any other extension.

When selecting this, it asks if this is a new theme or if we want to import from an existing one. We want to create a new one.

Next, you'll have to answer a few other questions, including:

  • What’s the extension's name?
  • What is the identifier? (I just went with the name, that’s probably typical.)
  • What is the description? (I just put something silly in initially. Don’t worry, you can update this in your package.json in the future.)
  • What's the publisher's name? (See earlier instructions.)
  • What name should be shown to the user? (I used the same as the extension name.)
  • Is this theme dark, light, or high contrast?

It will set you up with a base theme to start skinning your color preferences. The full scoop and all the details are here. More details about themes in general are here.

Test Drive

We have our base theme and we have some concepts for the palette. So, how do we test it out? When you open the directory with your theme, you can press fn + f5 on Mac (or just f5 on Windows) and a new window immediately pops open where you can test your theme! You’ll see in the original theme window that you now have a little control panel where you can reload, pause, and stop. Don’t forget to save before you do!

OK, now that you have the other window open, hit Command + Shift + P to get the command explorer. In there, type "Developer: Inspect TM Scopes" and you’ll see a prompt come up that allows you to look through all the tags and attributes: it will tell you their color, their font styles, and how you need to target them.

There is one problem, though. There are a lot of things in the editor you can’t target because VS Code will interpret that as you trying to drive the rest of the editor (i.e. the file viewer, the terminal, and the search boxes). Here are the two ways I found to figure out the rest of the scopes:

  • This page is extremely helpful in understanding some of the base things you need to configure. In fact, you might want to start with some of these.
  • There are DevTools! You can open them the same way you do with Chrome: Command + Option + I. What I did was look for the color in the computed styles and look them up in the text editor to target them. You’ll notice that the default in the DevTools is RGBA, so you will have to Shift + click on the color to change its format until you get to the equivalent hex values. I could then scan through the matching colors in my theme json until I found the matching value and change it.
Another Small Tip!

When I first started to develop the theme, I thought I would try forking someone else's theme as a starting point. I tried out Wes Bos' Cobalt Two. Though I didn't end up using it, one thing he had that I found valuable was a demos directory with examples of a whole slew of different languages. I started by moving his over, but realized quickly that the files weren’t long enough for my testing needs. So I created my own. In the course of correcting issues people filed, I also created a React stateless functional component example, a Ruby example, and of course I created a .vue single file component 😀 This is also helpful in maintenance because if people are seeing an issue on a file type I previously didn't test on, they can PR the file into the demos directory, and I can target what they're seeing. It makes duplication and testing really simple.

Research

Research for a code theme? Isn’t that over the top? Probably! But I was genuinely curious: what would work best for legibility for the vast majority of people, while still being something I liked?

Color and contrast

The first step was considering accessibility. I always liked how solarized themes made legibility central to their palettes. I read about color perception and accessibility, and it turns out that colorblindness is far more common in men (around 8% of men, 1% of women). The majority of programmers are men, so even though I am not colorblind, it was a no-brainer to craft the theme at least partially around including those with colorblindness. The most typical form is red/green deficiency, so I found a few good ways to test for it, my favorite being, funnily enough, a bit manual.

I originally started by testing random images to see if I could discern a pattern that I could match. One thing I noticed while testing was that complementary colors seemed to perform the best across all tests. However, if three colors needed to be tested at once, a triad color composition also produced good results.

If you’re unfamiliar with color relationships, Adobe Color CC (previously Kuler) makes it easier to visualize and you can even create a color palette directly in the editor.

It's extremely important to know that a color is only a color in reference to another color. This is part of what makes crafting a color theme so difficult. Color isn't static, it's all about relationships. You’re probably a little familiar with this in terms of accessibility. A light green on a black may be accessible, but when you change it to a white background it no longer is.

Accessibility in color can be measured with a number of tools. Here are some of my favorites:

It’s also really nice to set up your palette for accessibility from the start. Color Safe is a great tool that helps with that.

I cover more details about color and perception in this post: A Nerd's Guide to Color on the Web.

Colors and Reading Comprehension

Another piece of this was learning which colors, if any, had an effect on reading comprehension. Some studies have shown that black text on a white background, as used in some light themes, can be difficult for comprehension. The theory is that using overlays to change the text color improves cognitive awareness for many, especially those with dyslexia and autism. However, these studies are controversial, and it's inconclusive whether the overlays actually improve comprehension or are simply a preference.

There is a syndrome called Irlen, or Scotopic Sensitivity Syndrome (SSS), that is known to interfere with the ability to discern letters and words. It is a visual perception disability at the magnocellular level, the visual pathway that helps with scanning and comprehension. It has been thought to be connected to impaired reading under certain types of light, and some think it can affect up to 50% of people (again, this figure is controversial and inconclusive).

We’re still learning about SSS, but there are some studies suggesting that color overlays can help focus attention on the text and reduce eyestrain. The colors found so far to increase readability and contrast for those with SSS are beige, goldenrod, green, pink and blue. Blue has shown the strongest link so far for people with Reading Disability and Attention Deficit/Hyperactivity Disorder.

Despite the fact that these studies haven't reached statistical significance from what I can gather, I couldn't find evidence that there was any harm in following them, and it seemed safest to keep them in mind while developing the theme. I chose a dark theme with blue as the primary color and used the other colors that tested well in supportive and contrasting roles throughout the theme.

Other Theme Inspiration

There were a few palettes I looked at for inspiration. For example, I did an exploratory study into what kind of tone I wanted.

  • Palenight Material: the reds and purples in my theme started with this one, and I adjusted the purple values.
  • Dracula: this theme's base was a bit darker and provided contrast to the pastels I wanted in my theme.
  • Panda: I borrowed the turquoise color and adjusted it a bit.

I also looked at the work of Maggie Appleton quite a bit. I especially like her work on Egghead.io, which is amazing on every level.

Those greens and oranges are where I started with my palette. I made adjustments while working on accessibility. The blackest blue-black that’s in the lower right of the image became the base of my main background.

Decisions, Decisions

There were a lot of decisions to make at this point. Thankfully, my research was done. Remember, I wanted pastels, like the ones used in the Material Palenight, Panda and Dracula Dark themes. Specifically, I wanted to use beige, goldenrod, green, pink and blue based on what I had read in the research phase. But the most important part to me was contrast across the color spectrum. That’s what I felt some of the other themes lacked, even if they nailed the colors.

I went to work, creating blue and a golden color as the base standard for working across the color spectrum.

I used purple for keywords that are informative but I didn’t want to call out as strongly — if you’re trying to create contrast, you also need to consider what colors to make subtle so that it gives attention to what’s most important. If everything is important, nothing is. I also wanted to offset the fact that the purple had a shallow contrast by making it different in some other way. I did this with the use of italics. Some people like that, some hate it. I decided to buy a font called Dank Mono, similar to Operator Mono, or Fira Code (the latter being the free open source version), partially because I enjoyed the presentation of the italic glyphs. They also have font ligatures, which can be quite stylish. You set them in your user preferences with "editor.fontLigatures": true. Some people aren't super into the italics, though, so I created a no italic version that people could switch to if it bothered them.
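If you want to try a similar setup, the relevant user settings look something like this (the font family below is only an example, swap in whichever font you actually have installed):

{
  // any monospace font you own will do here; Dank Mono is the one I mention above
  "editor.fontFamily": "Dank Mono, Menlo, monospace",
  "editor.fontLigatures": true
}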

I wanted to call out state/data strongly because it tends to be important for me when scanning code. I started with red because I had seen that in many other themes, but I couldn’t get away from the fact that my eyes would only go there, and the fact that red is often associated with error states. So, instead, I used the strongest color against the background I had chosen, white, and italicized it to offset it even more. It also has the benefit of being a midpoint between blue and gold. I saved the red/orange distinction for React components, which needed to have some separation from the standard HTML elements.

Contrast is a zero-sum game: if everything is important, nothing is. I tried my best to make sure that things which were conceptually similar, or could recede, actually did recede, so that strong contrast was intentional and the result didn't turn into a rainbow, because that hurts your ability to scan the document.

One such decision was to keep the sidebar contrast low in order to keep the focus on the editor. I found that if I tried to bring the contrast up in other parts of my editor, my eyes actually began to hurt. This can be a challenging thing about accessibility: because not all humans are the same, things like color and font become a spectrum rather than a hard rule.

After running a lot of tests, the compromise I decided on was to keep the theme to what I myself could use without strain, and update the readme with the preferences I would recommend for someone who was different from me and needed the contrast levels to be higher. If you go into your user settings in VS Code (Code > Preferences > Settings), in the righthand pane, you can add your own customizations. With the help of some people in the community who filed issues because they wanted this feature, we arrived at these possible color preference updates for those who need the contrast:

"workbench.colorCustomizations": { "activityBar.background": "#000C1D", "activityBar.border": "#102a44", "sideBar.background": "#001122", "sideBar.border": "#102a44", "sideBar.foreground": "#8BADC1" },

You can actually drop any colors in here; this was just a suggested starting point based on the existing theme colors. These workbench color customizations are really handy: they allow people to use a theme and then make small tweaks as they feel they need them. If you're using a theme and it's allllmost perfect but not quite, you can always make small changes this way.

There were hundreds of other small decisions I made over the course of creating it (and am still making now that I'm maintaining it), but after I had made a good amount of tweaks, I would check my work against the colorblind simulator. It wasn’t terribly easy getting it to work right in every language for every setting, but I did my best. This is where that demo folder came in really handy. Now that it's launched, if someone needs particular language support, I can encourage them to PR the folder so that I can support it.

Here's an example Angular file:

...and here are some of the tests I ran to determine if there was enough contrast. Remember, what I was looking for was contrast across the color spectrum for meaningful distinctions, and slight contrast for things that require less attention:

It took a good amount of tests to get something that didn't become monochrome, especially across languages. The number of possible color combinations is a bit endless, and it's pretty difficult to make something that works perfectly in every scenario. That's why I spent a lot of time crafting the demo folder and making small tweaks to try to cover as much ground as possible.

Bugfixes

I launched it! Everyone party! &#x1f389;

The most helpful thing to me has been the contributions of people using the theme and letting me know their pain points by logging issues in the GitHub repo. It’s hard to see every failure scenario across a theme, and I have so far shipped 16 subsequent releases to fix over 50 bugs, some with help from the community. The more people who let me know what they’re seeing, the better the theme gets. Not everything gets in, of course — there are times when people want things that conflict with other requests, so I have to make a judgement call in some cases. Still, this is rare, and the majority of feedback so far has been very clean-cut and actionable.

That’s it! If you'd like to check out the theme, it's available here for free. I hope you found this useful, either for background into the theme and the decisions that were made, or for a process in creating your own.

The post Creating a VS Code Theme appeared first on CSS-Tricks.

World wide wrist

Css Tricks - Thu, 06/07/2018 - 7:48am

After all the hubbub with WWDC over the past couple of days, Ethan Marcotte is excited about the news that the Apple Watch will be able to view web content.

He writes:

If I had to guess, I’d imagine some sort of “reader mode” is coming to the Watch: in other words, when you open a link on your Watch, this minified version of WebKit wouldn’t act like a full browser. Instead of rendering all your scripts, styles, and layout, mini-WebKit would present a stripped-down version of your web page. If that’s the case, then Jen Simmons’s suggestion is spot-on: it just got a lot more important to design from a sensible, small screen-friendly document structure built atop semantic HTML.

But who knows! I could be wrong! Maybe it’s a more capable browser than I’m assuming, and we’ll start talking about best practices for layout, typography, and design on watches.

I had this inkling for a long while that there wouldn’t ever be a browser in the Watch due to its constraints, but instead I hoped that there might be a surge of methods to read web content aloud via some sort of voice interface. "Siri, read me the latest post from James’ blog," is probably nightmare fuel for most people but I was sort of holding out for devices like this to access the web via audio.

Another interesting aside is that Safari on both macOS and iOS has had a reader mode for a long time now, but it is an option the user enables while viewing the content. Bypassing that user-enabled option would be the difference on watchOS, and where our structured, semantic chops are put to the test.

Direct Link to ArticlePermalink

The post World wide wrist appeared first on CSS-Tricks.

Manipulating Pixels Using Canvas

Css Tricks - Thu, 06/07/2018 - 3:27am

Modern browsers support playing video via the <video> element. Most browsers also have access to webcams via the MediaDevices.getUserMedia() API. But even with those two things combined, we can’t really access and manipulate those pixels directly.

Fortunately, browsers have a Canvas API that allows us to draw graphics using JavaScript. We can actually draw images to the <canvas> from the video itself, which gives us the ability to manipulate and play with those pixels.

Everything you learn here about how to manipulate pixels will give you a foundation to work with images and videos of any kind or any source, not just canvas.

Adding an image to canvas

Before we start playing with video, let’s look at adding an image to canvas.

<img id="SourceImage" src="image.jpg"> <div class="video-container"> <canvas id="Canvas" class="video"></canvas> </div>

We created an image element that represents the image that is going to be drawn on the canvas. Alternatively we could use the Image object in JavaScript.

var canvas;
var context;

function init() {
  var image = document.getElementById('SourceImage');

  canvas = document.getElementById('Canvas');
  context = canvas.getContext('2d');

  drawImage(image);

  // Or
  // var image = new Image();
  // image.onload = function () {
  //   drawImage(image);
  // }
  // image.src = 'image.jpg';
}

function drawImage(image) {
  // Set the canvas the same width and height of the image
  canvas.width = image.width;
  canvas.height = image.height;

  context.drawImage(image, 0, 0);
}

window.addEventListener('load', init);

The code above draws the whole image onto the canvas.

See the Pen Paint image on canvas by Welling Guzman (@wellingguzman) on CodePen.

Now we can start playing with those pixels!

Updating the image data

The image data on the canvas allows us to manipulate and change the pixels.

The ImageData object returned by getImageData() has three properties: width, height and data, all of which are based on the original image. All these properties are read-only. The one we care about is data, a one-dimensional array represented by a Uint8ClampedArray object containing the data of each pixel in RGBA format.

Although the data property is readonly, it doesn’t mean we cannot change its value. It means we cannot assign another array to this property.

// Get the canvas image data
var imageData = context.getImageData(0, 0, canvas.width, canvas.height);

imageData.data = new Uint8ClampedArray(); // WRONG
imageData.data[1] = 0; // CORRECT

What values does the Uint8ClampedArray object represent, you may ask. Here is the description from MDN:

The Uint8ClampedArray typed array represents an array of 8-bit unsigned integers clamped to 0-255; if you specified a value that is out of the range of [0,255], 0 or 255 will be set instead; if you specify a non-integer, the nearest integer will be set. The contents are initialized to 0. Once established, you can reference elements in the array using the object's methods, or using standard array index syntax (that is, using bracket notation)

In short, this array stores values ranging from 0 to 255 in each position, making it the perfect fit for the RGBA format, as each channel is represented by a value from 0 to 255.
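A tiny standalone sketch (not from the article) showing that clamping in action:

var clamped = new Uint8ClampedArray(4); // initialized to [0, 0, 0, 0]

clamped[0] = 300;   // out of range, stored as 255
clamped[1] = -20;   // out of range, stored as 0
clamped[2] = 127.6; // non-integer, rounded to 128

// clamped is now [255, 0, 128, 0]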

RGBA colors

Colors can be represented by RGBA format, which is a combination of Red, Green and Blue. The A represents the alpha value which is the opacity of the color.

Each position in the array represents a color (pixel) channel value.

  • 1st position is the Red value
  • 2nd position is the Green value
  • 3rd position is the Blue value
  • 4th position is the Alpha value
  • 5th position is the next pixel Red value
  • 6th position is the next pixel Green value
  • 7th position is the next pixel Blue value
  • 8th position is the next pixel Alpha value
  • And so on...

If you have a 2x2 image, then we have a 16-position array (2x2 pixels x 4 values each).

The 2x2 image zoomed up close

The array will be represented as shown below:

[
  255, 0, 0, 255,     // RED
  0, 255, 0, 255,     // GREEN
  0, 0, 255, 255,     // BLUE
  255, 255, 255, 255  // WHITE
]

Changing the pixel data

One of the quickest things we can do is set all pixels to white by changing all RGBA values to 255.

// Use a button to trigger the "effect"
var button = document.getElementById('Button');
button.addEventListener('click', onClick);

function changeToWhite(data) {
  for (var i = 0; i < data.length; i++) {
    data[i] = 255;
  }
}

function onClick() {
  var imageData = context.getImageData(0, 0, canvas.width, canvas.height);

  changeToWhite(imageData.data);

  // Update the canvas with the new data
  context.putImageData(imageData, 0, 0);
}

The data is passed by reference, which means any modification we make to it will change the value of the argument passed.

Inverting colors

A nice effect that doesn’t require much calculation is inverting the colors of an image.

Inverting a color value can be done using the XOR operator (^) or the formula 255 - value (where value is between 0 and 255).

function invertColors(data) {
  for (var i = 0; i < data.length; i += 4) {
    data[i] = data[i] ^ 255;     // Invert Red
    data[i+1] = data[i+1] ^ 255; // Invert Green
    data[i+2] = data[i+2] ^ 255; // Invert Blue
  }
}

function onClick() {
  var imageData = context.getImageData(0, 0, canvas.width, canvas.height);

  invertColors(imageData.data);

  // Update the canvas with the new data
  context.putImageData(imageData, 0, 0);
}

We are incrementing the loop by 4 instead of 1 as we did before, so we can move from pixel to pixel; each pixel fills 4 elements in the array.

The alpha value has no effect on inverting colors, so we skip it.

Brightness and contrast

Adjusting the brightness of an image can be done using the next formula: newValue = currentValue + 255 * (brightness / 100).

  • brightness must be between -100 and 100
  • currentValue is the current light value of either Red, Green or Blue.
  • newValue is the result of the current color light plus brightness

Adjusting the contrast of an image can be done with this formula:

factor = (259 * (contrast + 255)) / (255 * (259 - contrast))
color = GetPixelColor(x, y)
newRed = Truncate(factor * (Red(color) - 128) + 128)
newGreen = Truncate(factor * (Green(color) - 128) + 128)
newBlue = Truncate(factor * (Blue(color) - 128) + 128)

The main calculation is getting the contrast factor that will be applied to each color value. Truncate is a function that makes sure the value stays between 0 and 255.

Let’s write these functions into JavaScript:

function applyBrightness(data, brightness) {
  for (var i = 0; i < data.length; i += 4) {
    data[i] += 255 * (brightness / 100);
    data[i+1] += 255 * (brightness / 100);
    data[i+2] += 255 * (brightness / 100);
  }
}

function truncateColor(value) {
  if (value < 0) {
    value = 0;
  } else if (value > 255) {
    value = 255;
  }

  return value;
}

function applyContrast(data, contrast) {
  var factor = (259.0 * (contrast + 255.0)) / (255.0 * (259.0 - contrast));

  for (var i = 0; i < data.length; i += 4) {
    data[i] = truncateColor(factor * (data[i] - 128.0) + 128.0);
    data[i+1] = truncateColor(factor * (data[i+1] - 128.0) + 128.0);
    data[i+2] = truncateColor(factor * (data[i+2] - 128.0) + 128.0);
  }
}

In this case you don't need the truncateColor function as Uint8ClampedArray will truncate these values, but for the sake of translating the algorithm we added that in.

One thing to keep in mind is that, if you apply a brightness or contrast, there’s no way back to the previous state as the image data is overwritten. The original image data must be stored separately for reference if we want to reset to the original state. Keeping the image variable accessible to other functions will be helpful as you can use that image instead to redraw the canvas with the original image.

var image = document.getElementById('SourceImage');

function redrawImage() {
  context.drawImage(image, 0, 0);
}

Using videos

To make it work with videos, we are going to take our initial image script and HTML code and make some small changes.

HTML

Swap the image element for a video element by replacing this line:

<img id="SourceImage" src="image.jpg">

...with this:

<video id="SourceVideo" src="video.mp4"></video> JavaScript

Replace this line:

var image = document.getElementById('SourceImage');

...with this:

var video = document.getElementById('SourceVideo');

To start working with the video, we have to wait until the video can be played.

video.addEventListener('canplay', function () {
  // Set the canvas the same width and height of the video
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  // Play the video
  video.play();

  // start drawing the frames
  drawFrame(video);
});

The event canplay is triggered when enough data is available that the media can be played, at least for a couple of frames.

We cannot see any of the video displayed on the canvas because we are only displaying the first frame. We must execute drawFrame every n milliseconds to keep up with the video frame rate.

Inside drawFrame we call drawFrame again every 10ms.

function drawFrame(video) {
  context.drawImage(video, 0, 0);

  setTimeout(function () {
    drawFrame(video);
  }, 10);
}

After we execute drawFrame, we create a loop executing drawFrame every 10ms — enough time to keep the video in sync in the canvas.
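As a side note (this variation is mine, not from the article), you could drive the same loop with requestAnimationFrame instead of a fixed 10ms timeout, which ties the redraw to the browser's refresh rate:

function drawFrame(video) {
  context.drawImage(video, 0, 0);

  // schedule the next draw right before the next repaint
  requestAnimationFrame(function () {
    drawFrame(video);
  });
}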

Adding the effect to the video

We can use the same function we created before for inverting colors:

function invertColors(data) {
  for (var i = 0; i < data.length; i += 4) {
    data[i] = data[i] ^ 255;     // Invert Red
    data[i+1] = data[i+1] ^ 255; // Invert Green
    data[i+2] = data[i+2] ^ 255; // Invert Blue
  }
}

And add it into the drawFrame function:

function drawFrame(video) {
  context.drawImage(video, 0, 0);

  var imageData = context.getImageData(0, 0, canvas.width, canvas.height);

  invertColors(imageData.data);

  context.putImageData(imageData, 0, 0);

  setTimeout(function () {
    drawFrame(video);
  }, 10);
}

We can add a button and toggle the effects:

function drawFrame(video) {
  context.drawImage(video, 0, 0);

  if (applyEffect) {
    var imageData = context.getImageData(0, 0, canvas.width, canvas.height);

    invertColors(imageData.data);

    context.putImageData(imageData, 0, 0);
  }

  setTimeout(function () {
    drawFrame(video);
  }, 10);
}

Using camera

We are going to keep the same code we used for video; the only difference is that we are going to change the video source from a file to the camera stream using MediaDevices.getUserMedia().

MediaDevices.getUserMedia() is the newer API that deprecates the older navigator.getUserMedia(). The old version still has browser support, and some browsers do not yet support the new one, so we may have to resort to a polyfill to make sure the browser supports one of them.

First, remove the src attribute from the video element:

<video id="SourceVideo"><code></pre> <pre rel="JavaScript"><code class="language-javascript">// Set the source of the video to the camera stream function initCamera(stream) { video.src = window.URL.createObjectURL(stream); } if (navigator.mediaDevices.getUserMedia) { navigator.mediaDevices.getUserMedia({video: true, audio: false}) .then(initCamera) .catch(console.error) ); }

Live Demo

Effects

Everything we’ve covered so far is the foundation we need in order to create different effects on a video or image. There are a lot of different effects that we can use by transforming each color independently.

GrayScale

Converting a color to grayscale can be done in different ways using different formulas/techniques. To avoid getting too deep into the subject, I will show you five formulas based on the GIMP desaturate tool and Luma:

Gray = 0.21R + 0.72G + 0.07B           // Luminosity
Gray = (R + G + B) ÷ 3                 // Average Brightness
Gray = 0.299R + 0.587G + 0.114B        // rec601 standard
Gray = 0.2126R + 0.7152G + 0.0722B     // ITU-R BT.709 standard
Gray = 0.2627R + 0.6780G + 0.0593B     // ITU-R BT.2100 standard

What we want to find using these formulas is the brightness intensity level of each pixel color. The value will range from 0 (black) to 255 (white). These values will create a grayscale (black and white) effect.

This means that the brightest color will be closest to 255 and the darkest color closest to 0.
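As a rough sketch of how one of these formulas could be applied to the canvas image data (this helper is not part of the article's demo code; it uses the luminosity weights from the first formula above):

function applyGrayscale(data) {
  for (var i = 0; i < data.length; i += 4) {
    // weighted sum of the Red, Green and Blue channels (luminosity formula)
    var gray = 0.21 * data[i] + 0.72 * data[i + 1] + 0.07 * data[i + 2];

    data[i] = gray;     // Red
    data[i + 1] = gray; // Green
    data[i + 2] = gray; // Blue
    // alpha (data[i + 3]) stays untouched
  }
}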

Live Demo

Duotones

The difference between the duotone effect and the grayscale effect is the two colors being used. In grayscale you have a gradient from black to white, while in duotone you can have a gradient from any color to any other color, blue to pink for example.

Using the grayscale intensity value, we can map each pixel to one of the gradient values.

We need to create a gradient from ColorA to ColorB.

function createGradient(colorA, colorB) {
  // Values of the gradient from colorA to colorB
  var gradient = [];
  // the maximum color value is 255
  var maxValue = 255;
  // Convert the hex color values to RGB objects
  var from = getRGBColor(colorA);
  var to = getRGBColor(colorB);

  // Creates 256 colors from Color A to Color B
  for (var i = 0; i <= maxValue; i++) {
    // IntensityB will go from 0 to 255
    // IntensityA will go from 255 to 0
    // IntensityA will decrease intensity while intensityB will increase
    // What this means is that ColorA will start solid and slowly transform into ColorB
    // If you look at it in another way, the transparency of color A will increase and the transparency of color B will decrease
    var intensityB = i;
    var intensityA = maxValue - intensityB;

    // The formula below combines the two colors based on their intensity
    // (IntensityA * ColorA + IntensityB * ColorB) / maxValue
    gradient[i] = {
      r: (intensityA*from.r + intensityB*to.r) / maxValue,
      g: (intensityA*from.g + intensityB*to.g) / maxValue,
      b: (intensityA*from.b + intensityB*to.b) / maxValue
    };
  }

  return gradient;
}

// Helper function to convert 6-digit hex values to an RGB color object
function getRGBColor(hex) {
  var colorValue;

  if (hex[0] === '#') {
    hex = hex.substr(1);
  }

  colorValue = parseInt(hex, 16);

  return {
    r: colorValue >> 16,
    g: (colorValue >> 8) & 255,
    b: colorValue & 255
  }
}

In short, we are creating an array of color values going from Color A to Color B, decreasing the intensity of the former while increasing the intensity of the latter.

From #0096ff to #ff00f0

Zoomed representation of the color transition

var gradients = [
  {r: 32, g: 144, b: 254},
  {r: 41, g: 125, b: 253},
  {r: 65, g: 112, b: 251},
  {r: 91, g: 96, b: 250},
  {r: 118, g: 81, b: 248},
  {r: 145, g: 65, b: 246},
  {r: 172, g: 49, b: 245},
  {r: 197, g: 34, b: 244},
  {r: 220, g: 21, b: 242},
  {r: 241, g: 22, b: 242},
];

Above is an example of a gradient of 10 color values from #0096ff to #ff00f0.

Grayscale representation of the color transition

Now that we have the grayscale representation of the image, we can use it to map it to the duotone gradient values.

The duotone gradient has 256 colors, and the grayscale also has 256 colors ranging from black (0) to white (255). That means a grayscale color value maps directly to a gradient element index.

var gradientColors = createGradient('#0096ff', '#ff00f0');

var imageData = context.getImageData(0, 0, canvas.width, canvas.height);

applyGradient(imageData.data);

for (var i = 0; i < data.length; i += 4) {
  // Get each channel color value
  var redValue = data[i];
  var greenValue = data[i+1];
  var blueValue = data[i+2];

  // Mapping the color values to the gradient index
  // Replacing the grayscale color value with a color from the duotone gradient
  data[i] = gradientColors[redValue].r;
  data[i+1] = gradientColors[greenValue].g;
  data[i+2] = gradientColors[blueValue].b;
  data[i+3] = 255;
}

Live Demo

Conclusion

This topic can go much deeper and cover many more effects. The homework for you is to find different algorithms you can apply to these skeleton examples.

Knowing how the pixels are structured on a canvas will allow you to create an unlimited number of effects, such as sepia, color blending, a green screen effect, image flickering/glitching, etc.
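For instance, a sepia effect is just another per-pixel recombination of the channels. Here is a quick sketch (not from the article) using a commonly cited set of sepia weights:

function applySepia(data) {
  for (var i = 0; i < data.length; i += 4) {
    var r = data[i];
    var g = data[i + 1];
    var b = data[i + 2];

    // commonly used sepia weights; the Uint8ClampedArray clamps anything above 255
    data[i] = 0.393 * r + 0.769 * g + 0.189 * b;     // Red
    data[i + 1] = 0.349 * r + 0.686 * g + 0.168 * b; // Green
    data[i + 2] = 0.272 * r + 0.534 * g + 0.131 * b; // Blue
  }
}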

You can even create effects on the fly without using an image or a video:

The post Manipulating Pixels Using Canvas appeared first on CSS-Tricks.

Headless CMS: The Developers’ Best Friend

Css Tricks - Thu, 06/07/2018 - 3:24am

(This is a sponsored post.)

Your current CMS sucks! You've known that for some time already but haven't yet decided what your next solution should be.

You've noticed all the buzz around headless CMS but you're still not sure what is in it for you and how it can solve all your woes.

What is the difference between a traditional on-premise CMS with a REST API and a true API-first, cloud-based CMS? How does a headless CMS fit your scenarios? What does being language agnostic bring you?

Explore the new possibilities unlocked by the headless CMS and see how it will help you stand out.

Direct Link to ArticlePermalink

The post Headless CMS: The Developers’ Best Friend appeared first on CSS-Tricks.

The web can be anything we want it to be

Css Tricks - Wed, 06/06/2018 - 6:59am

I really enjoyed this chat between Bruce Lawson and Mustafa Kurtuldu where they talked about browser support and the health of the web. Bruce expands upon a lot of the thoughts in a post he wrote last year called World Wide Web, Not Wealthy Western Web where he writes:

...across the world, regardless of disposable income, regardless of hardware or network speed, people want to consume the same kinds of goods and services. And if your websites are made for the whole world, not just the wealthy Western world, then the next 4 billion people might consume the stuff that your organization makes.

Another highlight is where Bruce also mentions that, as web developers, we might think that we’ve all moved on from jQuery as a community, and yet there are still millions of websites that depend upon jQuery to function properly. It's an interesting anecdote and relevant to recent discussions about React making a run at being the next thing to replace jQuery:

I’m just gonna throw this bomb here:

React is the new jQuery

There you go.

— Sara Soueidan (@SaraSoueidan) May 24, 2018

However! The most interesting part of this particular discussion, for me at least, is where they talk about Flash and the impact it had on the design of CSS3 and HTML5. They both argue that despite Flash’s shortcomings and accessibility issues, it happened to show us all that the web can be much more than just a place to store some hypertext and that ultimately it can be anything we want it to be.

Direct Link to ArticlePermalink

The post The web can be anything we want it to be appeared first on CSS-Tricks.

Animate Images and Videos with curtains.js

Css Tricks - Wed, 06/06/2018 - 4:00am

While browsing the latest award-winning websites, you may notice a lot of fancy image distortion animations or neat 3D effects. Most of them are created with WebGL, an API allowing GPU-accelerated image processing effects and animations. They also tend to use libraries built on top of WebGL such as three.js or pixi.js. Both are very powerful tools for creating 3D and 2D scenes, respectively.

But, you should keep in mind that those libraries were not originally designed to create slideshows or animate DOM elements. There is a library designed just for that, though, and we’re going to cover how to use it here in this post.

WebGL, CSS Positioning, and Responsiveness

Say you’re working with a library like three.js or pixi.js and you want to use it to create interactions, like mouseover and scroll events on elements. You might run into trouble! How do you position your WebGL elements relative to the document and other DOM elements? How would you handle responsiveness?

This is exactly what I had in mind when creating curtains.js.

Curtains.js allows you to create planes containing images and videos (in WebGL we will call them textures) that act like plain HTML elements, with position and size defined by CSS rules. But these planes can be enhanced with the endless possibilities of WebGL and shaders.

Wait, shaders?

Shaders are small programs written in GLSL that will tell your GPU how to render your planes. Knowing how shaders work is mandatory here because this is how we will handle animations. If you’ve never heard of them, you may want to learn the basics first. There are plenty of good websites to start learning them, like The Book of Shaders.

Now that you get the idea, let’s create our first plane!

Setup of a basic plane

To display our first plane, we will need a bit of HTML, CSS, and some JavaScript to create the plane. Then our shaders will animate it.

HTML

The HTML will be really simple here. We will create a <div> that will hold our canvas, and a div that will hold our image.

<body>
  <!-- div that will hold our WebGL canvas -->
  <div id="canvas"></div>

  <!-- div used to create our plane -->
  <div class="plane">
    <!-- image that will be used as a texture by our plane -->
    <img src="path/to/my-image.jpg" />
  </div>
</body>

CSS

We will use CSS to make sure the <div> that wraps the canvas is bigger than our plane, and to apply any size to the plane div. (Our WebGL plane will have the exact same size and position as this div.)

body {
  /* make the body fit our viewport */
  position: relative;
  width: 100%;
  height: 100vh;
  margin: 0;

  /* hide scrollbars */
  overflow: hidden;
}

#canvas {
  /* make the canvas wrapper fit the document */
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
}

.plane {
  /* define the size of your plane */
  width: 80%;
  max-width: 1400px;
  height: 80vh;
  position: relative;
  top: 10vh;
  margin: 0 auto;
}

.plane img {
  /* hide the img element */
  display: none;
}

JavaScript

There's a bit more work in the JavaScript. We need to instantiate our WebGL context, create a plane with uniform parameters, and use it.

window.onload = function() {
  // pass the id of the div that will wrap the canvas to set up our WebGL context and append the canvas to our wrapper
  var webGLCurtain = new Curtains("canvas");

  // get our plane element
  var planeElement = document.getElementsByClassName("plane")[0];

  // set our initial parameters (basic uniforms)
  var params = {
    vertexShaderID: "plane-vs", // our vertex shader ID
    fragmentShaderID: "plane-fs", // our fragment shader ID
    uniforms: {
      time: {
        name: "uTime", // uniform name that will be passed to our shaders
        type: "1f", // this means our uniform is a float
        value: 0,
      },
    }
  }

  // create our plane mesh
  var plane = webGLCurtain.addPlane(planeElement, params);

  // use the onRender method of our plane fired at each requestAnimationFrame call
  plane.onRender(function() {
    plane.uniforms.time.value++; // update our time uniform value
  });
}

Shaders

We need to write the vertex shader. It won’t be doing much except position our plane based on the model view and projection matrix and pass varyings to the fragment shader:

<!-- vertex shader --> <script id="plane-vs" type="x-shader/x-vertex"> #ifdef GL_ES precision mediump float; #endif // those are the mandatory attributes that the lib sets attribute vec3 aVertexPosition; attribute vec2 aTextureCoord; // those are mandatory uniforms that the lib sets and that contain our model view and projection matrix uniform mat4 uMVMatrix; uniform mat4 uPMatrix; // if you want to pass your vertex and texture coords to the fragment shader varying vec3 vVertexPosition; varying vec2 vTextureCoord; void main() { // get the vertex position from its attribute vec3 vertexPosition = aVertexPosition; // set its position based on projection and model view matrix gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0); // set the varyings vTextureCoord = aTextureCoord; vVertexPosition = vertexPosition; } </script>

Now our fragment shader. This is where we will add a little displacement effect based on our time uniform and the texture coordinates.

<!-- fragment shader --> <script id="plane-fs" type="x-shader/x-fragment"> #ifdef GL_ES precision mediump float; #endif // get our varyings varying vec3 vVertexPosition; varying vec2 vTextureCoord; // the uniform we declared inside our javascript uniform float uTime; // our texture sampler (this is the lib default name, but it could be changed) uniform sampler2D uSampler0; void main() { // get our texture coords vec2 textureCoord = vTextureCoord; // displace our pixels along both axis based on our time uniform and texture UVs // this will create a kind of water surface effect // try to comment a line or change the constants to see how it changes the effect // reminder : textures coords are ranging from 0.0 to 1.0 on both axis const float PI = 3.141592; textureCoord.x += ( sin(textureCoord.x * 10.0 + ((uTime * (PI / 3.0)) * 0.031)) + sin(textureCoord.y * 10.0 + ((uTime * (PI / 2.489)) * 0.017)) ) * 0.0075; textureCoord.y += ( sin(textureCoord.y * 20.0 + ((uTime * (PI / 2.023)) * 0.023)) + sin(textureCoord.x * 20.0 + ((uTime * (PI / 3.1254)) * 0.037)) ) * 0.0125; gl_FragColor = texture2D(uSampler0, textureCoord); } </script>

Et voilà! You’re all done, and if everything went well, you should be seeing something like this.

See the Pen curtains.js basic plane by Martin Laxenaire (@martinlaxenaire) on CodePen.

Adding 3D and interactions

Alright, that’s pretty cool so far, but we started this post talking about 3D and interactions, so let’s look at how we could add those in.

About vertices

To add a 3D effect we would have to change the plane vertices position inside the vertex shader. However in our first example, we did not specify how many vertices our plane should have, so it was created with a default geometry containing six vertices forming two triangles :

In order to get decent 3D animations, we would need more triangles, thus more vertices:

This plane has five segments along its width and five segments along its height. As a result, we have 50 triangles and 150 total vertices.

Refactoring our JavaScript

Fortunately, it is easy to specify our plane definition as it could be set inside our initial parameters.

We are also going to listen to mouse position to add a bit of interaction. To do it properly, we will have to wait for the plane to be ready, convert our mouse document coordinates to our WebGL clip space coordinates and send them to the shaders as a uniform.

// we are using window onload event here but this is not mandatory window.onload = function() { // track the mouse positions to send it to the shaders var mousePosition = { x: 0, y: 0, }; // pass the id of the div that will wrap the canvas to set up our WebGL context and append the canvas to our wrapper var webGLCurtain = new Curtains("canvas"); // get our plane element var planeElement = document.getElementsByClassName("plane")[0]; // set our initial parameters (basic uniforms) var params = { vertexShaderID: "plane-vs", // our vertex shader ID fragmentShaderID: "plane-fs", // our framgent shader ID widthSegments: 20, heightSegments: 20, // we now have 20*20*6 = 2400 vertices ! uniforms: { time: { name: "uTime", // uniform name that will be passed to our shaders type: "1f", // this means our uniform is a float value: 0, }, mousePosition: { // our mouse position name: "uMousePosition", type: "2f", // notice this is a length 2 array of floats value: [mousePosition.x, mousePosition.y], }, mouseStrength: { // the strength of the effect (we will attenuate it if the mouse stops moving) name: "uMouseStrength", // uniform name that will be passed to our shaders type: "1f", // this means our uniform is a float value: 0, }, } } // create our plane mesh var plane = webGLCurtain.addPlane(planeElement, params); // once our plane is ready, we could start listening to mouse/touch events and update its uniforms plane.onReady(function() { // set a field of view of 35 to exagerate perspective // we could have done it directly in the initial params plane.setPerspective(35); // listen our mouse/touch events on the whole document // we will pass the plane as second argument of our function // we could be handling multiple planes that way document.body.addEventListener("mousemove", function(e) { handleMovement(e, plane); }); document.body.addEventListener("touchmove", function(e) { handleMovement(e, plane); }); }).onRender(function() { // update our time uniform value plane.uniforms.time.value++; // continually decrease mouse strength plane.uniforms.mouseStrength.value = Math.max(0, plane.uniforms.mouseStrength.value - 0.0075); }); // handle the mouse move event function handleMovement(e, plane) { // touch event if(e.targetTouches) { mousePosition.x = e.targetTouches[0].clientX; mousePosition.y = e.targetTouches[0].clientY; } // mouse event else { mousePosition.x = e.clientX; mousePosition.y = e.clientY; } // convert our mouse/touch position to coordinates relative to the vertices of the plane var mouseCoords = plane.mouseToPlaneCoords(mousePosition.x, mousePosition.y); // update our mouse position uniform plane.uniforms.mousePosition.value = [mouseCoords.x, mouseCoords.y]; // reassign mouse strength plane.uniforms.mouseStrength.value = 1; } }

Now that our JavaScript is done, we have to rewrite our shaders so that they’ll use our mouse position uniform.

Refactoring the shaders

Let’s look at our vertex shader first. We have three uniforms that we could use for our effect:

  1. the time which is constantly increasing
  2. the mouse position
  3. our mouse strength, which is constantly decreasing until the next mouse move

We will use all three of them to create a kind of 3D ripple effect.

<script id="plane-vs" type="x-shader/x-vertex"> #ifdef GL_ES precision mediump float; #endif // those are the mandatory attributes that the lib sets attribute vec3 aVertexPosition; attribute vec2 aTextureCoord; // those are mandatory uniforms that the lib sets and that contain our model view and projection matrix uniform mat4 uMVMatrix; uniform mat4 uPMatrix; // our time uniform uniform float uTime; // our mouse position uniform uniform vec2 uMousePosition; // our mouse strength uniform float uMouseStrength; // if you want to pass your vertex and texture coords to the fragment shader varying vec3 vVertexPosition; varying vec2 vTextureCoord; void main() { vec3 vertexPosition = aVertexPosition; // get the distance between our vertex and the mouse position float distanceFromMouse = distance(uMousePosition, vec2(vertexPosition.x, vertexPosition.y)); // this will define how close the ripples will be from each other. The bigger the number, the more ripples you'll get float rippleFactor = 6.0; // calculate our ripple effect float rippleEffect = cos(rippleFactor * (distanceFromMouse - (uTime / 120.0))); // calculate our distortion effect float distortionEffect = rippleEffect * uMouseStrength; // apply it to our vertex position vertexPosition += distortionEffect / 15.0; gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0); // varyings vTextureCoord = aTextureCoord; vVertexPosition = vertexPosition; } </script>

As for the fragment shader, we are going to keep it simple. We are going to fake lights and shadows based on each vertex position:

<script id="plane-fs" type="x-shader/x-fragment"> #ifdef GL_ES precision mediump float; #endif // get our varyings varying vec3 vVertexPosition; varying vec2 vTextureCoord; // our texture sampler (this is the lib default name, but it could be changed) uniform sampler2D uSampler0; void main() { // get our texture coords vec2 textureCoords = vTextureCoord; // apply our texture vec4 finalColor = texture2D(uSampler0, textureCoords); // fake shadows based on vertex position along Z axis finalColor.rgb -= clamp(-vVertexPosition.z, 0.0, 1.0); // fake lights based on vertex position along Z axis finalColor.rgb += clamp(vVertexPosition.z, 0.0, 1.0); // handling premultiplied alpha (useful if we were using a png with transparency) finalColor = vec4(finalColor.rgb * finalColor.a, finalColor.a); gl_FragColor = finalColor; } </script>

And there you go!

See the Pen curtains.js ripple effect example by Martin Laxenaire (@martinlaxenaire) on CodePen.

With these two simple examples, we’ve seen how to create a plane and interact with it.

Videos and displacement shaders

Our last example will create a basic fullscreen video slideshow using a displacement shader to enhance the transitions.

Displacement shader concept

The displacement shader will create a nice distortion effect. It will be written inside our fragment shader using a grayscale picture and will offset the pixel coordinates of the videos based on the texture RGB values. Here’s the image we will be using:

The effect will be calculated based on each pixel RGB value, with a black pixel being [0, 0, 0] and a white pixel [1, 1, 1] (GLSL equivalent for [255, 255, 255]). To simplify, we will use only the red channel value, as with a grayscale image red, green and blue are always equal.

You can try to create your own grayscale image (it works great with geometric shapes) to get your own unique transition effect.

Multiple textures and videos

A plane can have more than one texture simply by adding multiple image tags. This time, instead of images we want to use videos. We just have to replace the <img /> tags with a <video /> one. However there are two things to know when it comes to video:

  • The video will always fit the exact size of the plane, which means your plane has to have the same width/height ratio as your video. This is not a big deal though, because it is easy to handle with CSS.
  • On mobile devices, we can’t autoplay videos without a user gesture, like a click event. It is therefore safer to add an "enter site" button to display and launch our videos.
HTML

The HTML is still pretty straightforward. We will create our canvas div wrapper, our plane div containing the textures and a button to trigger the video autoplay. Just notice the use of the data-sampler attribute on the image and video tags—it will be useful inside our fragment shader.

<body> <div id="canvas"></div> <!-- this div will handle the fullscreen video sizes and positions --> <div class="plane-wrapper"> <div class="plane"> <!-- notice here we are using the data-sampler attribute to name our sampler uniforms --> <img src="path/to/displacement.jpg" data-sampler="displacement" /> <video src="path/to/video.mp4" data-sampler="firstTexture"></video> <video src="path/to/video-2.mp4" data-sampler="secondTexture"></video> </div> </div> <div id="enter-site-wrapper"> <span id="enter-site"> Click to enter site </span> </div> </body> CSS

The stylesheet will handle a few things: display the button and hide the canvas before the user has entered the site, size and position our plane-wrapper div to handle fullscreen responsive videos.

@media screen { body { margin: 0; font-size: 18px; font-family: 'PT Sans', Verdana, sans-serif; background: #212121; line-height: 1.4; height: 100vh; width: 100vw; overflow: hidden; } /*** canvas ***/ #canvas { position: absolute; top: 0; right: 0; bottom: 0; left: 0; z-index: 10; /* hide the canvas until the user clicks the button */ opacity: 0; transition: opacity 0.5s ease-in; } /* display the canvas */ .video-started #canvas { opacity: 1; } .plane-wrapper { position: absolute; /* center our plane wrapper */ left: 50%; top: 50%; transform: translate(-50%, -50%); z-index: 15; } .plane { position: absolute; top: 0; right: 0; bottom: 0; left: 0; /* tell the user he can click the plane */ cursor: pointer; } /* hide the original image and videos */ .plane img, .plane video { display: none; } /* center the button */ #enter-site-wrapper { display: flex; justify-content: center; align-items: center; align-content: center; position: absolute; top: 0; right: 0; bottom: 0; left: 0; z-index: 30; /* hide the button until everything is ready */ opacity: 0; transition: opacity 0.5s ease-in; } /* show the button */ .curtains-ready #enter-site-wrapper { opacity: 1; } /* hide the button after the click event */ .curtains-ready.video-started #enter-site-wrapper { opacity: 0; pointer-events: none; } #enter-site { padding: 20px; color: white; background: #ee6557; max-width: 200px; text-align: center; cursor: pointer; } } /* fullscreen video responsive */ @media screen and (max-aspect-ratio: 1920/1080) { .plane-wrapper { height: 100vh; width: 177vh; } } @media screen and (min-aspect-ratio: 1920/1080) { .plane-wrapper { width: 100vw; height: 56.25vw; } } JavaScript

As for the JavaScript, we will go like this:

  • Set a couple variables to store our slideshow state
  • Create the Curtains object and add the plane to it
  • When the plane is ready, listen to a click event to start our videos playback (notice the use of the playVideos() method). Add another click event to switch between the two videos.
  • Update our transition timer uniform inside the onRender() method
window.onload = function() { // here we will handle which texture is visible and the timer to transition between images var activeTexture = 1; var transitionTimer = 0; // set up our WebGL context and append the canvas to our wrapper var webGLCurtain = new Curtains("canvas"); // get our plane element var planeElements = document.getElementsByClassName("plane"); // some basic parameters var params = { vertexShaderID: "plane-vs", fragmentShaderID: "plane-fs", imageCover: false, // our displacement texture has to fit the plane uniforms: { transitionTimer: { name: "uTransitionTimer", type: "1f", value: 0, }, }, } var plane = webGLCurtain.addPlane(planeElements[0], params); // create our plane plane.onReady(function() { // display the button document.body.classList.add("curtains-ready"); // when our plane is ready we add a click event listener that will switch the active texture value planeElements[0].addEventListener("click", function() { if(activeTexture == 1) { activeTexture = 2; } else { activeTexture = 1; } }); // click to play the videos document.getElementById("enter-site").addEventListener("click", function() { // display canvas and hide the button document.body.classList.add("video-started"); // play our videos plane.playVideos(); }, false); }).onRender(function() { // increase or decrease our timer based on the active texture value // at 60fps this should last one second if(activeTexture == 2) { transitionTimer = Math.min(60, transitionTimer + 1); } else { transitionTimer = Math.max(0, transitionTimer - 1); } // update our transition timer uniform plane.uniforms.transitionTimer.value = transitionTimer; }); } Shaders

This is where all the magic happens. As in our first example, the vertex shader won’t do much, so you’ll want to focus on the fragment shader, which creates the “dive in” effect:

<script id="plane-vs" type="x-shader/x-vertex"> #ifdef GL_ES precision mediump float; #endif // default mandatory variables attribute vec3 aVertexPosition; attribute vec2 aTextureCoord; uniform mat4 uMVMatrix; uniform mat4 uPMatrix; // varyings varying vec3 vVertexPosition; varying vec2 vTextureCoord; // custom uniforms uniform float uTransitionTimer; void main() { vec3 vertexPosition = aVertexPosition; gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0); // varyings vTextureCoord = aTextureCoord; vVertexPosition = vertexPosition; } </script> <script id="plane-fs" type="x-shader/x-fragment"> #ifdef GL_ES precision mediump float; #endif varying vec3 vVertexPosition; varying vec2 vTextureCoord; // custom uniforms uniform float uTransitionTimer; // our textures samplers // notice how it matches our data-sampler attributes uniform sampler2D firstTexture; uniform sampler2D secondTexture; uniform sampler2D displacement; void main( void ) { // our texture coords vec2 textureCoords = vec2(vTextureCoord.x, vTextureCoord.y); // our displacement texture vec4 displacementTexture = texture2D(displacement, textureCoords); // our displacement factor is a float varying from 1 to 0 based on the timer float displacementFactor = 1.0 - (cos(uTransitionTimer / (60.0 / 3.141592)) + 1.0) / 2.0; // the effect factor will tell which way we want to displace our pixels // the farther from the center of the videos, the stronger it will be vec2 effectFactor = vec2((textureCoords.x - 0.5) * 0.75, (textureCoords.y - 0.5) * 0.75); // calculate our displaced coordinates to our first video vec2 firstDisplacementCoords = vec2(textureCoords.x - displacementFactor * (displacementTexture.r * effectFactor.x), textureCoords.y - displacementFactor * (displacementTexture.r * effectFactor.y)); // opposite displacement effect on the second video vec2 secondDisplacementCoords = vec2(textureCoords.x - (1.0 - displacementFactor) * (displacementTexture.r * effectFactor.x), textureCoords.y - (1.0 - displacementFactor) * (displacementTexture.r * effectFactor.y)); // apply the textures vec4 firstDistortedColor = texture2D(firstTexture, firstDisplacementCoords); vec4 secondDistortedColor = texture2D(secondTexture, secondDisplacementCoords); // blend both textures based on our displacement factor vec4 finalColor = mix(firstDistortedColor, secondDistortedColor, displacementFactor); // handling premultiplied alpha finalColor = vec4(finalColor.rgb * finalColor.a, finalColor.a); // apply our shader gl_FragColor = finalColor; } </script>

Here’s our little video slideshow with a cool transition effect:

See the Pen curtains.js video slideshow by Martin Laxenaire (@martinlaxenaire) on CodePen.

This example is a great starting point for building your own slideshow with curtains.js: you could use images instead of videos, change the displacement texture, modify the fragment shader, or even add more slides…
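If you go the image route, for instance, here’s a minimal sketch of what changes (my own adaptation, not from the original demo): keep the same markup but use <img> tags carrying the same data-sampler attributes (firstTexture, secondTexture, displacement), and drop the playVideos() call and the “enter site” button, since images don’t need a user gesture to start. The shaders stay exactly the same; image file names below are placeholders.

window.onload = function() {
  var activeTexture = 1;
  var transitionTimer = 0;

  // same setup as before
  var webGLCurtain = new Curtains("canvas");
  var planeElement = document.getElementsByClassName("plane")[0];

  var plane = webGLCurtain.addPlane(planeElement, {
    vertexShaderID: "plane-vs",
    fragmentShaderID: "plane-fs",
    imageCover: false, // the displacement texture still has to fit the plane
    uniforms: {
      transitionTimer: {
        name: "uTransitionTimer",
        type: "1f",
        value: 0,
      },
    },
  });

  plane.onReady(function() {
    // no playVideos() needed: the images are ready as soon as the plane is
    planeElement.addEventListener("click", function() {
      activeTexture = activeTexture == 1 ? 2 : 1;
    });
  }).onRender(function() {
    // same eased timer as in the video version
    transitionTimer = activeTexture == 2
      ? Math.min(60, transitionTimer + 1)
      : Math.max(0, transitionTimer - 1);

    plane.uniforms.transitionTimer.value = transitionTimer;
  });
};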

Going deeper

We’ve just scratched the surface of what’s possible with curtains.js. You could, for example, create multiple planes with a nice mouse-over effect for your article thumbnails; a rough sketch of that idea follows below. The possibilities are almost endless.
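Here’s what that thumbnail idea could look like, as a sketch rather than a finished implementation: it reuses only the calls we’ve already seen (addPlane(), onReady(), onRender(), a "1f" uniform), while the .thumb-plane class, the "thumb-vs"/"thumb-fs" shaders and the uHoverTimer uniform they would read are hypothetical names of my own.

window.onload = function() {
  var webGLCurtain = new Curtains("canvas");

  // one plane per article thumbnail (hypothetical .thumb-plane elements)
  var planeElements = document.getElementsByClassName("thumb-plane");

  for(var i = 0; i < planeElements.length; i++) {
    createThumbPlane(planeElements[i]);
  }

  function createThumbPlane(element) {
    // each plane keeps its own hover state and timer
    var isHovered = false;
    var hoverTimer = 0;

    var plane = webGLCurtain.addPlane(element, {
      vertexShaderID: "thumb-vs",   // hypothetical shaders that read uHoverTimer
      fragmentShaderID: "thumb-fs",
      uniforms: {
        hoverTimer: {
          name: "uHoverTimer",
          type: "1f",
          value: 0,
        },
      },
    });

    plane.onReady(function() {
      element.addEventListener("mouseenter", function() { isHovered = true; });
      element.addEventListener("mouseleave", function() { isHovered = false; });
    }).onRender(function() {
      // ease the hover uniform up and down, just like the slideshow timer
      hoverTimer = isHovered
        ? Math.min(60, hoverTimer + 1)
        : Math.max(0, hoverTimer - 1);

      plane.uniforms.hoverTimer.value = hoverTimer;
    });
  }
};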

If you want to see more examples covering all these basic usages, you can check the library website or the GitHub repo.

The post Animate Images and Videos with curtains.js appeared first on CSS-Tricks.
