Developer News

Empathy Prompts

Css Tricks - Thu, 07/13/2017 - 2:44am

Activities to help you develop empathy for the variety of people that use your thing. Eric Bailey:

This project is geared towards anyone involved with making digital products. It is my hope that this reaches both:

  • People who are not necessarily involved in the day-to-day part of the process, but who help shape things like budget, timeline, and scope, and
  • People who work every day to help to give these products shape and form

These prompts are intended to help build empathy, not describe any one person's experience. These prompts are not intended to tokenize the experience of the individuals experiencing these conditions.

I love the "share" link on the page. It's basically window.prompt("go ahead");

Direct Link to ArticlePermalink

Empathy Prompts is a post from CSS-Tricks

Net Neutrality

Css Tricks - Wed, 07/12/2017 - 6:46am

I'm linking up a "call to action" style site here as it's nicely done and explains the situation fairly well. Right now, there are rules (in the United States) against internet providers prioritizing speed and access on a site-by-site basis. If they could, they probably would, and that's straight up bad for the internet.

In other "good for the internet" news... does my site need HTTPS?

Direct Link to ArticlePermalink

Net Neutrality is a post from CSS-Tricks

(Now More Than Ever) You Might Not Need jQuery

Css Tricks - Wed, 07/12/2017 - 2:29am

The DOM and native browser APIs have improved by leaps and bounds since jQuery's release all the way back in 2006. People have been writing "You Might Not Need jQuery" articles since 2013 (see this classic site and this classic repo). I don't want to rehash old territory, but a good bit has changed in browser land since the last "You Might Not Need jQuery" article you might have stumbled upon. Browsers continue to implement new APIs that take the pain away from library-free development, many of them directly copied from jQuery.

Let's go through some new vanilla alternatives to jQuery methods.

Remove an element from the page

Remember the maddeningly roundabout way you had to remove an element from the page with the vanilla DOM, el.parentNode.removeChild(el)? Here's a comparison of the jQuery way and the new improved vanilla way.

jQuery:

var $elem = $(".someClass"); // select the element
$elem.remove(); // remove the element

Without jQuery:

var elem = document.querySelector(".someClass"); // select the element
elem.remove(); // remove the element

For the rest of this post, we'll assume that $elem is a jQuery-selected set of elements, and elem is a native JavaScript-selected DOM element.

Prepend an element

jQuery:

$elem.prepend($someOtherElem);

Without jQuery:

elem.prepend(someOtherElem);

Insert an element before another element

jQuery:

$elem.before($someOtherElem);

Without jQuery:

elem.before(someOtherElem);

Replace an element with another element

jQuery:

$elem.replaceWith($someOtherElem);

Without jQuery:

elem.replaceWith(someOtherElem);

Find the closest ancestor that matches a given selector

jQuery:

$elem.closest("div");

Without jQuery:

elem.closest("div");

Browser Support of DOM manipulation methods

These methods now have a decent level of browser support:

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 54 | Opera 41 | Firefox 49 | IE No | Edge No | Safari 10

Mobile / Tablet: iOS Safari 10.0-10.2 | Opera Mobile No | Opera Mini No | Android 56 | Android Chrome 59 | Android Firefox 54

They are also currently being implemented in Edge.

Fade in an Element

jQuery:

$elem.fadeIn();

By writing our own CSS we have far more control over how we animate our element. Here I'll do a simple fade.

.thingy {
  display: none;
  opacity: 0;
  transition: .8s;
}

elem.style.display = "block";
requestAnimationFrame(() => elem.style.opacity = 1);

Call an event handler callback only once

jQuery:

$elem.one("click", someFunc);

In the past when writing plain JavaScript, we had to call removeEventListener inside of the callback function.

function dostuff() {
  alert("some stuff happened");
  this.removeEventListener("click", dostuff);
}

var button = document.querySelector("button");
button.addEventListener("click", dostuff);

Now things are a lot cleaner. You might have seen the third optional parameter sometimes passed into addEventListener. It's a boolean to decide between event capturing and event bubbling. Nowadays, however, the third argument can alternatively be a configuration object.

elem.addEventListener('click', someFunc, { once: true, });
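To see exactly what `once: true` buys you, the "call at most once" semantics are easy to sketch as a plain wrapper function. The `once` helper below is a hypothetical illustration of the idea in pure JavaScript, not a DOM API:

```javascript
// Hypothetical helper: wraps a callback so it runs at most once,
// mirroring what { once: true } does for event listeners.
function once(fn) {
  let called = false;
  return function (...args) {
    if (called) return;
    called = true;
    return fn.apply(this, args);
  };
}

// Simulate three "clicks" against a plain counter:
let count = 0;
const handler = once(() => count++);
handler();
handler();
handler();
console.log(count); // 1
```

The configuration-object version is nicer precisely because the browser does this bookkeeping for you and also removes the listener afterwards.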

If you still want to use event capturing as well as have the callback called only once, then you can specify that in the configuration object as well:

elem.addEventListener('click', myClickHandler, {
  once: true,
  capture: true
});

Animation

jQuery's .animate() method is pretty limited.

$elem.animate({
  width: "70%",
  opacity: 0.4,
  marginLeft: "0.6in",
  fontSize: "3em",
  borderWidth: "10px"
}, 1500);

The docs say "All animated properties should be animated to a single numeric value, except as noted below; most properties that are non-numeric cannot be animated using basic jQuery functionality." This rules out transforms, and you need a plugin just to animate colors. You'd be far better off with the new Web Animations API.

var elem = document.querySelector('.animate-me');
elem.animate([
  {
    transform: 'translateY(-1000px) scaleY(2.5) scaleX(.2)',
    transformOrigin: '50% 0',
    filter: 'blur(40px)',
    opacity: 0
  },
  {
    transform: 'translateY(0) scaleY(1) scaleX(1)',
    transformOrigin: '50% 50%',
    filter: 'blur(0)',
    opacity: 1
  }
], 1000);

Ajax

Another key selling point of jQuery in the past has been Ajax. jQuery abstracted away the ugliness of XMLHttpRequest:

$.ajax('https://some.url', {
  success: (data) => { /* do stuff with the data */ }
});

The new fetch API is a superior replacement for XMLHttpRequest and is now supported by all modern browsers.

fetch('https://some.url')
  .then(response => response.json())
  .then(data => {
    // do stuff with the data
  });

Admittedly fetch can be a bit more complicated than this small code sample. For example, the Promise returned from fetch() won't reject on HTTP error status. It is, however, far more versatile than anything built on top of XMLHttpRequest.
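A common way to handle that quirk is a small status check between the fetch and the JSON parsing. The `checkStatus` helper below is a sketch; the name is mine, not part of the Fetch spec, though `response.ok` and `response.status` are real properties of the Response object:

```javascript
// Sketch: reject on HTTP error statuses, which fetch() itself won't do.
// response.ok is true only for statuses in the 200-299 range.
function checkStatus(response) {
  if (!response.ok) {
    throw new Error('HTTP error ' + response.status);
  }
  return response;
}

// In a browser it would slot into the chain like this:
// fetch('https://some.url')
//   .then(checkStatus)
//   .then(response => response.json())
//   .then(data => { /* do stuff with the data */ });

// The helper is plain JS, so stub objects are enough to exercise it:
console.log(checkStatus({ ok: true, status: 200 }).status); // 200
```

Because the check throws, a single `.catch()` at the end of the chain then handles both network failures and HTTP error statuses.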

If we want ease of use though, there is a simpler option that has gained popularity - but it's not native to the browser, which brings me onto...

The Rise of the Micro-Library

Axios is a popular library for Ajax. It is a great example of a micro-library - a library designed to do just one thing. While most libraries will not be as well tested as jQuery, they can often be an appealing alternative to the jQuery behemoth.

(Almost) Everything Can Be Polyfilled

So now you're aware that the DOM is pretty nice to work with! But perhaps you've looked at these developments only to think "oh well, still need to support IE9 so I better use jQuery". Most of the time it doesn't really matter what Can I Use says about a certain feature you want to utilize. You can use whatever you like, and polyfills can fill in the gaps. There was a time when, if you wanted to use a fancy new browser feature, you had to find a polyfill and then include it on your page. Doing this for all the features missing in IE9 would be an arduous task. Now it's as simple as adding a single script tag:

<script src="https://cdn.polyfill.io/v2/polyfill.min.js"></script>

This simple script tag can polyfill just about anything. If you haven't heard about this polyfill service from the Financial Times you can read about it at polyfill.io.

Iterating a NodeList in 2017

jQuery's massive adoption hasn't solely been fostered by its reassuring ironing out of browser bugs and inconsistencies in IE relics. Today jQuery has one remaining selling point: iteration.

Iterable NodeLists are so fundamentally important to the quality of the DOM. Unsurprisingly I now use React for most of my coding instead. — John Resig (@jeresig) April 29, 2016

It has defied rationality that NodeLists aren't iterable. Developers have had to jump through hoops to make them so. A classic for loop may be the most performance-optimized approach, but it sure isn't something I enjoy typing. And so we ended up with this ugliness:

var myArrayFromNodeList = [].slice.call(document.querySelectorAll('li'));

Or:

[].forEach.call(myNodeList, function (item) {...});

More recently we've been able to use Array.from, a terser, more elegant way of turning a NodeList into an array.

Array.from(document.querySelectorAll('li')).forEach(li => { /* do something with li */ });
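Array.from works on any array-like object (anything with a length and indexed entries), which is exactly what a NodeList is. It also accepts a map function as a second argument, which is easy to demonstrate with a plain object standing in for a NodeList:

```javascript
// Array.from accepts any array-like object; a NodeList qualifies,
// but so does this plain object, which makes the behavior easy to see:
const arrayLike = { 0: 'a', 1: 'b', 2: 'c', length: 3 };
console.log(Array.from(arrayLike)); // [ 'a', 'b', 'c' ]

// The optional second argument is a map function applied to each item:
console.log(Array.from(arrayLike, s => s.toUpperCase())); // [ 'A', 'B', 'C' ]
```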

But the big news is that NodeLists are now iterable by default.

It's about time we have iterable NodeLists! https://t.co/nIT5uHALpW 🎉🎉🎉 Been asking for this for years! https://t.co/edb0TTSdop

— John Resig (@jeresig) April 29, 2016

Now simply type:

document.querySelectorAll('li').forEach(li => { /* do some stuff */ });

Edge is the last modern browser to not support iterable NodeLists but is currently working on it.

Is jQuery Slow?

jQuery may be faster than sloppily written vanilla JS, but that's just a good reason to learn JavaScript better! Paul Irish was a contributor to the jQuery project and concluded:

Performance recommendation: Do not use jQuery's hide() method. Ever. https://t.co/zEQf6F54p6
Classes are your friend.

— Paul Irish (@paul_irish) February 8, 2015

Here's what the creator of jQuery has to say about learning the native DOM in his (totally essential) JavaScript book Secrets of the JavaScript Ninja:

"Why do you need to understand how it works if the library will take care of it for you? The most compelling reason is performance. Understanding how DOM modification works in libraries can allow you to write better and faster code."

What I Dislike About jQuery

Rather than smoothing over only the remaining ugly parts of certain browser APIs, jQuery seeks to replace them all wholesale. By returning a jQuery object rather than a NodeList, built-in browser methods are essentially off limits, meaning you're locked into the jQuery way of doing everything. For beginners, what once made front-end scripting approachable is now a hindrance, as it essentially means there are two duplicate ways of doing everything. If you want to read others' code with ease and apply both to jobs that require vanilla JS and to jobs that require jQuery, you have twice as much to learn. There are, however, libraries that have adopted an API that will be reassuringly familiar to jQuery addicts, but that return a NodeList rather than an object...

Can't Live Without $?

Perhaps you've grown fond of that jQuery $. Certain micro-libraries have sought to emulate the jQuery API.

  • Lea Verou, an Invited Expert at the W3C CSS Working Group, who herself penned the article jQuery Considered Harmful is the author of Bliss.js. Bliss uses a familiar $ syntax but returns a NodeList.
  • Paul Irish, meanwhile, released Bling.js "because you want the $ of jQuery without the jQuery."
  • Remy Sharp offered a similar micro-library, aptly named min.js.

I'm no anti-jQuery snob. Some great developers still choose to use it. If you're already comfortable using it and at home with its API, there's no huge reason to ditch it. Ultimately there are people who use jQuery and know what a closure is and who write enterprise-level web apps, and people who use vanilla JS who don't. Plenty of jobs still list it as a required skill. For anybody starting out though, it looks like an increasingly bad choice. Internet Explorer 11 is thankfully the final version of that infernal contraption. As soon as IE dies the entire browser landscape will be evergreen, and jQuery will increasingly be seen as a bygone relic from the DOM's dirty past.

(Now More Than Ever) You Might Not Need jQuery is a post from CSS-Tricks

Transitioning Gradients

Css Tricks - Wed, 07/12/2017 - 2:27am

Keith J. Grant:

In CSS, you can't transition a background gradient. It jumps from one gradient to the other immediately, with no smooth transition between the two.

He documents a clever tactic of positioning a pseudo element covering the element with a different background and transitioning the opacity of that pseudo element. You also need a little z-index trickery to ensure any content inside stays visible.

Gosh, I remember a time not so long ago pseudo elements weren't transitionable!

I figured as long as we're using a pseudo element here, I'd document a few others ways as well. We could always move the position of a longer element, making it look like a gradient transition. Or, we could use a half-transparent gradient and transition a solid background behind it.

Direct Link to ArticlePermalink

Transitioning Gradients is a post from CSS-Tricks

Let’s Talk About Speech CSS

Css Tricks - Tue, 07/11/2017 - 4:43am

Boston, like many large cities, has a subway system. Commuters on it are accustomed to hearing regular public address announcements.

Riders simply tune out some announcements, such as the pre-recorded station stop names repeated over and over. Or public service announcements from local politicians and celebrities—again, kind of repetitive and not worth paying attention to after the first time. Most important are service alerts, which typically deliver direct and immediate information riders need to take action on.

An informal priority

A regular rider's ear gets trained to listen for important announcements, passively, while fiddling around on a phone or zoning out after a hard day of work. It's not a perfect system—occasionally I'll find myself trapped on a train that's been pressed into express service.

But we shouldn't remove lower priority announcements. It's unclear what kind of information will be important to whom: tourists, new residents, or visiting friends and family, to name a few.

A little thought experiment: Could this priority be more formalized via sound design? The idea would be to use different voices consistently or to prefix certain announcements with specific tones. I've noticed an emergent behavior from the train operators that kind of mimics this: Sometimes they'll use a short blast of radio static to get riders' attention before an announcement.

Opportunities

I've been wondering if this kind of thinking can be extended to craft better web experiences for everyone. After all, sound is enjoying a renaissance on the web: the Web Audio API has great support, and most major operating systems now ship with built-in narration tools. Digital assistants such as Siri are near-ubiquitous, and podcasts and audiobooks are a normal part of people's media diet.

Deep in CSS'—ahem—labyrinthine documentation are references to two Media Types that speak to the problem: aural and speech. The core idea is pretty simple: audio-oriented CSS tells a digitized voice how it should read content, the same as how regular CSS tells the browser how to visually display content. Of the two, aural has been deprecated. speech Media Type detection is also tricky, as a screen reader potentially may not communicate its presence to the browser.

The CSS 3 Speech Module, the evolved version of the aural Media Type, looks the most promising. Like display: none;, it is part of the small subset of CSS that has an impact on screen reader behavior. It uses traditional CSS property/value pairings alongside existing declarations to create an audio experience that has parity with the visual box model.

code {
  background-color: #292a2b;
  color: #e6e6e6;
  font-family: monospace;
  speak: literal-punctuation; /* Reads all punctuation out loud in iOS VoiceOver */
}

Just because you can, doesn't mean you should

In his book Building Accessible Websites, published in 2003, author/journalist/accessibility researcher Joe Clark outlines some solid reasons for never altering the way spoken audio is generated. Of note:

Support

Many browsers don't honor the technology, so writing the code would be a waste of effort. Simple and direct.

Proficiency

Clark argues that developers shouldn't mess with the way spoken content is read because they lack training to "craft computer voices, position them in three-dimensional space, and specify background music and tones for special components."

This may be the case for some, but ours is an industry of polymaths. I've known plenty of engineers who develop enterprise-scale technical architecture by day and compose music by night. There's also the fact that we've kind of already done it.

The point he's driving at—crafting an all-consuming audio experience is an impossible task—is true. But the situation has changed. An entire audio universe doesn't need to be created from whole cloth any more. Digitized voices are now supplied by most operating systems, and the number of inexpensive/free stock sound and sound editing resources is near-overwhelming.

Appropriateness

For users of screen readers, the reader's voice is their interface for the web. As a result, users can be very passionate about their screen reader's voice. In this light, Clark argues for not changing how a screen reader sounds out content that is not in its dictionary.

Screen readers have highly considered defaults for handling digital interfaces, and probably tackle content types many developers would not even think to consider. For example, certain screen readers use specialized sound cues to signal behaviors. NVDA uses a series of tones to communicate an active progress bar:

Altering screen reader behavior effectively alters the user's expected experience. Sudden, unannounced changes can be highly disorienting and can be met with fear, anger, and confusion.

A good parallel would be if developers were to change how a mouse scrolls and clicks on a per-website basis. This type of unpredictability is not a case of annoying someone, it's a case of inadvertently rendering content more difficult to understand or changing default operation to something unfamiliar.

My voice is not your voice

A screen reader's voice is typically tied to the region and language preference set in the operating system.

For example, iOS contains a setting not just for English, but for variations that include United Kingdom, Ireland, Singapore, New Zealand, and five others. A user picking UK English will, among other things, find their Invert Colors feature renamed to "Invert Colours."

However, a user's language preference setting may not be their primary language, the language of their country of origin, or the language of the country they're currently living in. My favorite example is my American friend who set the voice on his iPhone to UK English to make Siri sound more like a butler.

UK English is also an excellent reminder that regional differences are a point of consideration, y'all.

Another consideration is biological and environmental hearing loss. It can manifest with a spectrum of severity, so the voice-balance property may have the potential to "move" the voice outside of someone's audible range.

Also, the speed the voice reads out content may be too fast for some or too slow for others. Experienced screen reader operators may speed up the rate of speech, much as some users quickly scroll a page to locate information they need. A user new to screen readers, or a user reading about an unfamiliar topic may desire a slower speaking rate to keep from getting overwhelmed.

And yet

Clark admits that some of his objections exist only in the realm of the academic. He cites the case of a technologically savvy blind user who uses the power of CSS' interoperability to make his reading experience pleasant.

According to my (passable) research skills, not much work has been done in asking screen reader users their preferences for this sort of technology in the fourteen years since the book was published. It's also important to remember that screen reader users aren't necessarily blind, nor are they necessarily technologically illiterate.

The idea here would be to treat CSS audio manipulation as something a user can opt into, either globally or on a per-site basis. Think web extensions like Greasemonkey/Tampermonkey, or when websites ask permission to send notifications. It could be as simple as the kinds of preference toggles users are already used to interacting with:

A fake screenshot simulating a preference in NVDA that would allow the user to enable or disable CSS audio manipulation.

There is already a precedent for this. Accessibility Engineer Léonie Watson notes that JAWS—another popular screen reader—“has a built in sound scheme that enables different voices for different parts of web pages. This suggests that perhaps there is some interest in enlivening the audio experience for screen reader users.”

Opt-in also supposes features such as whitelists to prevent potential abuses of CSS-manipulated speech. For example, a user could only allow certain sites with CSS-manipulated content to be read, or block things like unsavory ad networks who use less-than-scrupulous practices to get attention.

Opinions: I have a few

In certain situations a screen reader can't know the context of content but can accept a human-authored suggestion on how to correctly parse it. For example, James Craig's 2011 WWDC video outlines using speak-as values to make street names and code read accurately (starts at the 15:36 mark, requires Safari to view).

In programming, every symbol counts. Being able to confidently state the relationship between things in code is a foundational aspect of programming. The case of thisOne != thisOtherOne being read as "this one is equal to this other one" when the intent was "this one is not equal to this other one" is an especially compelling concern.

Off the top of my head, other examples where this kind of audio manipulation would be desirable are:

  • Ensuring names are pronounced properly.
  • Muting pronunciation of icons (especially icons made with web fonts) in situations where the developer can't edit the HTML.
  • Using sound effect cues for interactive components that the screen reader lacks built-in behaviors for.
  • Creating a cloud-synced service that stores a user's personal collection of voice preferences and pronunciation substitutions.
  • Ability to set a companion voice to read specialized content such as transcribed interviews or code examples.
  • Emoting. Until we get something approaching EmotionML support, this could be a good way to approximate the emotive intent of the author (No, emoji don't count).
  • Spicing things up. If a user can't view a website's art direction, their experience relies on the skill of the writer or sound editor—on the internet this can sometimes leave a lot to be desired.

The reality of the situation

The CSS Speech Module document was last modified in March 2012. VoiceOver on iOS implements support using the following speak-as values for the speak property, as shown in this demo by accessibility consultant Paul J. Adam:

  • normal
  • digits
  • literal-punctuation
  • spell-out

Apparently, the iOS accessibility features Speak Selection and Speak Screen currently do not honor these properties.

Despite the fact that the CSS 3 Speech Module has yet to be ratified (and is therefore still subject to change), VoiceOver support signals that a de facto standard has been established. The popularity of iOS—millions of devices, 76% of which run the latest version of iOS—makes implementation worth considering. For those who would benefit from the clarity provided by these declarations, it could potentially make a big difference.

Be inclusive, be explicit

Play to CSS' strengths and make small, surgical tweaks to website content to enhance the overall user experience, regardless of device or usage context. Start with semantic markup and a Progressive Enhancement mindset. Don't override pre-existing audio cues for the sake of vanity. Use iOS-supported speak-as values to provide clarity where VoiceOver's defaults need an informed suggestion.

Writing small utility classes and applying them to semantically neutral span tags wrapped around troublesome content would be a good approach. Here's a recording of VoiceOver reading this CodePen to demonstrate:

Take care to extensively test to make sure these tweaks don't impair other screen reading software. If you're not already testing with screen readers, there's no time like the present to get started!

Unfortunately, current support for CSS speech is limited. But learning what it can and can't do, and the situations in which it could be used is still vital for developers. Thoughtful and well-considered application of CSS is a key part of creating robust interfaces for all users, regardless of their ability or circumstance.

Let’s Talk About Speech CSS is a post from CSS-Tricks

Jekyll Includes are Cool

Css Tricks - Tue, 07/11/2017 - 2:04am

Dave Rupert:

When cruising through the Includes documentation I noticed a relatively new feature where you can pass data into a component.

I was similarly excited learning about Macros in Nunjucks. Then:

After a couple days of writing includes like this I thought to myself "Why am I not just writing Web Components?"

Direct Link to ArticlePermalink

Jekyll Includes are Cool is a post from CSS-Tricks

Designed Lines

Css Tricks - Tue, 07/11/2017 - 1:58am

Ethan Marcotte on digital disenfranchisement and why we should design lightning fast, accessible websites:

We're building on a web littered with too-heavy sites, on an internet that's unevenly, unequally distributed. That’s why designing a lightweight, inexpensive digital experience is a form of kindness. And while that kindness might seem like a small thing these days, it's a critical one. A device-agnostic, data-friendly interface helps ensure your work can reach as many people as possible, regardless of their location, income level, network quality, or device.

Direct Link to ArticlePermalink

Designed Lines is a post from CSS-Tricks

Glue Cross-Browser Responsive Irregular Images with Sticky Tape

Css Tricks - Mon, 07/10/2017 - 2:00am

I recently came across this Atlas of Makers by Vasilis van Gemert. Its fun and quirky appearance made me look under the hood, and it was certainly worth it! What I discovered is that it was actually built using really cool features that so many articles and talks have been written about over the past few years, but that somehow don't get used that much in the wild - the likes of CSS Grid, custom properties, blend modes, and even SVG.

SVG was used in order to create the irregular images that appear as if they were glued onto the page with the pieces of neon sticky tape. This article is going to explain how to recreate that in the simplest possible manner, without ever needing to step outside the browser. Let's get started!

The first thing we do is pick an image we start from, for example, this pawesome snow leopard:

The image we'll be using: a fluffy snow leopard.

The next thing we do is get a rough polygon we can fit the cat in. For this, we use Bennett Feely's Clippy. We're not actually going to use CSS clip-path since it's not cross-browser yet (but if you want it to be, please vote for it - no sign in required), it's just to get the numbers for the polygon points really fast without needing to use an actual image editor with a ton of buttons and options that make you give up before even starting.

We set the custom URL for our image and set custom dimensions. Clippy limits these dimensions based on viewport size, but for us, in this case, the actual dimensions of the image don't really matter (especially since the output is only going to be % values anyway), only the aspect ratio, which is 2:3 in the case of our cat picture.

Clippy: setting custom dimensions and URL.

We turn on the "Show outside clip-path" option on to make it easier to see what we'll be doing.

Clippy: turning on the "Show outside clip-path" option.

We then choose to use a custom polygon for our clipping path, we select all the points, we close the path and then maybe tweak some of their positions.

Clippy: selecting the points of a custom polygon that very roughly approximates the shape of the cat.

This has generated the CSS clip-path code for us. We copy just the list of points (as % values), bring up the console and paste this list of points as a JavaScript string:

let coords = '69% 89%, 84% 89%, 91% 54%, 79% 22%, 56% 14%, 45% 16%, 28% 0, 8% 0, 8% 10%, 33% 33%, 33% 70%, 47% 100%, 73% 100%';

We get rid of the % characters and split the string:

coords = coords.replace(/%/g, '').split(', ').map(c => c.split(' '));

We then set the dimensions for our image:

let dims = [736, 1103];

After that, we scale the coordinates we have to the dimensions of our image. We also round the values we get because we're sure not to need decimals for a rough polygonal approximation of the cat in an image that big.

coords = coords.map(c => c.map((c, i) => Math.round(.01*dims[i]*c)));

Finally, we bring this to a form we can copy from dev tools:

`[${coords.map(c => `[${c.join(', ')}]`).join(', ')}]`;

Screenshot of the steps above in the dev tools console.
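Put together, the console steps above amount to a small scaling routine. As a sanity check, here it is as one function (the name scaleCoords is mine); running it on the cat picture's numbers reproduces the coordinate array used in the next step:

```javascript
// The whole console routine as one function:
// a string of "x% y%" pairs plus image dimensions in, pixel pairs out.
function scaleCoords(str, dims) {
  return str
    .replace(/%/g, '')
    .split(', ')
    .map(pair => pair.split(' ').map((c, i) => Math.round(.01 * dims[i] * c)));
}

const coords = scaleCoords(
  '69% 89%, 84% 89%, 91% 54%, 79% 22%, 56% 14%, 45% 16%, 28% 0, 8% 0, 8% 10%, 33% 33%, 33% 70%, 47% 100%, 73% 100%',
  [736, 1103]
);
console.log(coords[0]); // [ 508, 982 ]
```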

Now we move on to generating our SVG with Pug. Here's where we use the array of coordinates we got at the previous step:

- var coords = [[508, 982], [618, 982], [670, 596], [581, 243], [412, 154], [331, 176], [206, 0], [59, 0], [59, 110], [243, 364], [243, 772], [346, 1103], [537, 1103]];
- var w = 736, h = 1103;

svg(viewBox=[0, 0, w, h].join(' '))
  clipPath#cp
    polygon(points=coords.join(' '))
  image(xlink:href='snow_derpard.jpg' width=w height=h clip-path='url(#cp)')

This gives us the irregular shaped image we've been after:

See the Pen by thebabydino (@thebabydino) on CodePen.

Now let's move on to the pieces of sticky tape. In order to generate them, we use the same array of coordinates. Before doing anything else at this step, we read its length so that we can loop through it:

-// same as before
- var n = coords.length;

svg(viewBox=[0, 0, w, h].join(' '))
  -// same as before
  - for(var i = 0; i < n; i++) {
  - }

Next, within this loop, we have a random test to decide whether we have a strip of sticky tape from the current point to the next point:

- for(var i = 0; i < n; i++) {
  - if(Math.random() > .5) {
    path(d=`M${coords[i]} ${coords[(i + 1)%n]}`)
  - }
- }

At first sight, this doesn't appear to do anything.

However, this is because the default stroke is none. Making this stroke visible (by setting it to an hsl() value with a randomly generated hue) and thicker reveals our sticky tape:

stroke: hsl(random(360), 90%, 60%);
stroke-width: 5%;
mix-blend-mode: multiply;

We've also set mix-blend-mode: multiply on it so that overlap becomes a bit more obvious.

See the Pen by thebabydino (@thebabydino) on CodePen.

Looks pretty good, but we still have a few problems here.

The first and most obvious one being that this isn't cross-browser. mix-blend-mode doesn't work in Edge (if you want it, don't forget to vote for it). The way we can get a close enough effect is by making the stroke semitransparent just for Edge.

My initial idea here was to do this in a way that's only supported in Edge for now: using a calc() value whose result isn't an integer for the RGB components. The problem is that we have an hsl() value, not an rgb() one. But since we're using Sass, we can extract the RGB components:

$c: hsl(random(360), 90%, 60%);
stroke: $c;
stroke: rgba(calc(#{red($c)} - .5), green($c), blue($c), .5);

The last rule is the one that gets applied in Edge, but is discarded due to the calc() result in Chrome and simply due to the use of calc() in Firefox, so we get the result we want this way.

The second stroke rule seen as invalid in Chrome (left) and Firefox (right) dev tools.

However, this won't be the case anymore if the other browsers catch up with Edge here.

So a more future-proof solution would be to use @supports:

path {
  $c: hsl(random(360), 90%, 60%);
  stroke: rgba($c, .5);

  @supports (mix-blend-mode: multiply) {
    stroke: $c;
  }
}

The second problem is that we want our strips to expand a bit beyond their end points. Fortunately, this problem has a straightforward fix: setting stroke-linecap to square. This effectively makes our strips extend by half a stroke-width beyond each of their two ends.

See the Pen by thebabydino (@thebabydino) on CodePen.

The final problem is that our sticky strips get cut off at the edge of our SVG. Even if we set the overflow property to visible on the SVG, the container our SVG is in might cut it off anyway or an element coming right after might overlap it.

So what we can try to do is increase the viewBox space all around the image by an amount we'll call p that's just enough to fit our sticky tape strips.

//- same as before
- var w1 = w + 2*p, h1 = h + 2*p;

svg(viewBox=[-p, -p, w1, h1].join(' '))
  //- same as before

The question here is... how much is that p amount?

Well, in order to get that value, we need to take into account the fact that our stroke-width is a % value. In SVG, a % value for something like the stroke-width is computed relative to the diagonal of the SVG region (strictly, the spec uses the normalized diagonal, sqrt(w² + h²)/√2, but working with the full diagonal only overestimates the padding, which is safe for our purposes). In our case, this SVG region is a rectangle of width w and height h. If we draw the diagonal of this rectangle, we see that we can compute it using Pythagoras' theorem in the yellow highlighted triangle.

The diagonal of the SVG rectangle can be computed from a right triangle where the catheti are the SVG viewBox width and height.

So our diagonal is:

- var d = Math.sqrt(w*w + h*h);

From here, we can compute the stroke-width as 5% of the diagonal. This is equivalent to multiplying the diagonal (d) with a .05 factor:

- var f = .05, sw = f*d;

Note that this is moving from a % value (5%) to a value in user units (.05*d). This is going to be convenient as, by increasing the viewBox dimensions we also increase the diagonal and, therefore, what 5% of this diagonal is.
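To put concrete numbers on this (using hypothetical dimensions, not the demo's actual ones), here is the same computation in plain JavaScript:

```javascript
// Hypothetical 800x600 viewBox, just to see the formulas above in action.
const w = 800, h = 600;
const d = Math.sqrt(w*w + h*h); // diagonal: 1000
const f = .05;                  // the 5% factor
const sw = f*d;                 // stroke-width in user units: 50
```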

The stroke of any path is drawn half inside, half outside the path line. However, we need to increase the viewBox space by more than half a stroke-width. We also need to take into account the fact that the stroke-linecap extends beyond the endpoints of the path by half a stroke-width:

The effect of stroke-width and stroke-linecap: square.

Now let's consider the situation when a point of our clipping polygon is right on the edge of our original image. We only consider one of the polygonal edges that have an end at this point in order to simplify things (everything is just the same for the other one).

We take the particular case of a strip along a polygonal edge having one end E on the top edge of our original image (and of the SVG as well).

Highlighting a polygonal edge which has one endpoint on the top boundary of the original image.

We want to see by how much this strip can extend beyond the top edge of the image in the case when it's created with a stroke and the stroke-linecap is set to square. This depends on the angle formed with the top edge and we're interested in finding the maximum amount of extra space we need above this top boundary so that no part of our strip gets cut off by overflow.

In order to understand this better, the interactive demo below allows us to rotate the strip and get a feel for how far the outer corners of the stroke creating this strip (including the square linecap) can extend:

See the Pen by thebabydino (@thebabydino) on CodePen.

As the demo above illustrates by tracing the outer corners of the stroke (including the stroke-linecap), the extra space needed beyond the image boundary is greatest when the segment between the endpoint E and the outer corner of the stroke at that endpoint (this outer corner being either A or B, depending on the angle) is vertical, and this maximum amount equals the length of that segment.

Given that the stroke extends by half a stroke-width beyond the end point, both in the tangent and in the normal direction, it results that the length of this line segment is the hypotenuse of a right isosceles triangle whose catheti each equal half a stroke-width:

The segment connecting the endpoint to the outer corner of the stroke including the linecap is the hypotenuse in a right isosceles triangle where the catheti are half a stroke-width.

Using Pythagoras' theorem in this triangle, we have:

- var hw = .5*sw;
- var p = Math.sqrt(hw*hw + hw*hw); // = hw*Math.sqrt(2)

Putting it all together, our Pug code becomes:

//- same coordinates and initial dimensions as before
- var f = .05, d = Math.sqrt(w*w + h*h);
- var sw = f*d, hw = .5*sw;
- var p = +(hw*Math.sqrt(2)).toFixed(2);
- var w1 = w + 2*p, h1 = h + 2*p;

svg(viewBox=[-p, -p, w1, h1].join(' ')
    style=`--sw: ${+sw.toFixed(2)}px`)
  //- same as before
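A quick way to check the computed padding and viewBox values is to run the same math in plain JavaScript; the w and h below are hypothetical stand-ins for the dimensions the demo derives from its polygon coordinates:

```javascript
// Hypothetical image dimensions standing in for the demo's computed ones.
const w = 800, h = 600;
const f = .05, d = Math.sqrt(w*w + h*h); // diagonal: 1000
const sw = f*d, hw = .5*sw;              // stroke-width 50, half of it 25
const p = +(hw*Math.sqrt(2)).toFixed(2); // padding: 35.36
const w1 = w + 2*p, h1 = h + 2*p;        // expanded viewBox dimensions
const viewBox = [-p, -p, w1, h1].join(' '); // "-p -p (w + 2p) (h + 2p)"
```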

while in the CSS we're left to tweak that stroke-width for the sticky strips:

stroke-width: var(--sw);

Note that we cannot make --sw a unitless value in order to then set the stroke-width to calc(var(--sw)*1px) - while in theory this should work, in practice Firefox and Edge don't yet support using calc() values for stroke-* properties.

The final result can be seen in the following Pen:

See the Pen by thebabydino (@thebabydino) on CodePen.

Glue Cross-Browser Responsive Irregular Images with Sticky Tape is a post from CSS-Tricks

If You’re Inlining SVG Icons, How Do You Deal With Unique Titles and IDs?

Css Tricks - Fri, 07/07/2017 - 3:01am

Just inlining SVG seems to be the easiest and most flexible icon system. But that chunk of <svg> might have a <title>, and you might be applying IDs to both of those elements for various reasons.

One of those reasons might be because you just want an ID on the icon to uniquely identify it for JavaScript or styling purposes.

Another of those reasons is that for accessibility, it's recommended you use aria-labelledby to connect the id and title, like:

<!-- aria-labelledby pointing to ID's of title and desc
     because some browsers incorrectly don't use them unless we do -->
<svg role="img" viewBox="0 0 100 100" aria-labelledby="unique-title-id unique-desc-id">
  <!-- title becomes the tooltip as well as what is read to assistive technology -->
  <!-- must be the first child! -->
  <title id="unique-title-id">Short Title (e.g. Add to Cart)</title>
  <!-- longer description if needed -->
  <desc id="unique-desc-id">A friendly looking cartoon cart icon with blinking eyes.</desc>
  <!-- all the SVG drawing stuff -->
  <path d="..." />
</svg>

But now you include that SVG somewhere twice. Say you're in Rails...

<%= render "/icons/icon.svg.erb" %>

<p>yadda yadda yadda</p>

<%= render "/icons/icon.svg.erb" %>

Now you'll have two elements on the page with the exact same ID, which is... bad?

It's definitely bad if you're relying on that ID for anything JavaScript related, because JavaScript will only find the first one and that might be confusing and weird.

I'm not entirely sure if it's bad for accessibility. Perhaps someone else can weigh in there. Assuming the titles are the same, my guess is that it won't matter much.

It's bad for HTML semantics, I suppose, but I'm always kinda meh on that if there are no repercussions.

If you're really interested in fixing this issue, my go-to would be to pass in the IDs to be used manually.

Again if you were in Rails, you could pass locals:

<%= render(
  partial: "parts/modules/search",
  locals: {
    svg_id: "my-icon",
    title_id: "my-icon-title",
    desc_id: "my-icon-desc"
  }
) %>

And then design the icons to use those locals, like

<svg id="<%= svg_id %>" aria-labelledby="<%= title_id %>" ... >
  <title id="<%= title_id %>"> ... </title>
  ...
</svg>

You could port that concept to any language. A React app could have:

<SVGIcon svg_id="..." title_id="..." />

A PHP app could set variables before an include:

$svg_id = "..."; $title_id = "..."; include("/icons/icon.svg.php");

Now it's just on you to manage IDs to make them unique like we've always done with IDs.

This little post was inspired by Austin Wolf, who had this problem and thought through some solutions. This also included auto-generating unique IDs:

<svg aria-labelledby="star-6c84fb90-12c4-11e1-840d-7b25c5ee775a">
  <title id="star-6c84fb90-12c4-11e1-840d-7b25c5ee775a">star icon</title>
  <!-- ... -->
</svg>

That also seems like a good solution to me.
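If managing hand-written UUIDs feels heavy, a counter-based generator is enough to keep IDs unique within a single page render. A minimal sketch (the helper name is made up for illustration, not from Austin's post):

```javascript
// Hypothetical helper: a prefix plus an incrementing counter guarantees
// uniqueness within one page render (unlike UUIDs, not across renders).
let idCounter = 0;
const uniqueId = (prefix) => `${prefix}-${++idCounter}`;

uniqueId('star'); // "star-1"
uniqueId('star'); // "star-2"
```

The same generated value then goes on both the <title> id and the aria-labelledby attribute.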

I'd be interested to hear more thoughts!

If You’re Inlining SVG Icons, How Do You Deal With Unique Titles and IDs? is a post from CSS-Tricks

Firebase & React Part 2: User Authentication

Css Tricks - Fri, 07/07/2017 - 2:21am

This is a follow up to the CSS-Tricks Article Intro to Firebase and React. In that lesson, we built Fun Food Friends, an application for planning your next potluck. It looked like this:

If you haven't completed that article yet, please complete that article first before attempting this one - it builds on the existing code from that application.

If you'd like to skip that article and dive right into this one, you can clone this repository which contains the finished version of the application from part one. Just don't forget that you'll need to create your own Firebase database and swap in the credentials for it, as well as run npm install before beginning! If you aren't sure how to do either of these things, take a look at part one before diving into this one.

What we'll be making

Today we'll be adding authentication to our Fun Food Friends app, so that only users that are signed in can view who is bringing what to the potluck, as well as be able to contribute their own items. When signed out, it will look like this:

When users are not signed in, they will be unable to see what people are bringing to the potluck, nor will they be able to add their own items.

When signed in, it will look like this:

Your name will be automatically added to the Add Item section, and your Google photo will appear in the bottom right-hand corner of the screen. You will also only be able to remove items you added to the potluck.

Before we Start: Get the CSS

I've added some additional CSS to this project in order to give a little bit of polish to the app. Grab it from here and paste it right into `src/App.css`!

Getting Started: Enabling Google Authentication on our Firebase Project

Start by logging in to Firebase Console and visiting your database's Dashboard. Then click on the Authentication tab. You should see something that looks like this:

Click on the Sign-In Method tab:

Firebase can handle authentication by asking the user for an email and password, or it can take advantage of third-party providers such as Google and Twitter in order to take care of authentication and authentication flow for you. Remember when you first logged in to Firebase, it used your Google credentials to authenticate you? Firebase allows you to add that feature to apps that you build.

We're going to use Google as our authentication provider for this project, primarily because it will make handling our authentication flow very simple: we won't have to worry about things like error handling and password validation since Google will take care of all of that for us. We also won't have to build any UI components (other than a login and logout button) to handle auth. Everything will be managed through a popup.

Hover over Google, select the pencil on the right hand side of the screen, and click Enable in the box that appears. Finally, hit Save.

Now, click Database on the left hand side of the screen, and head to the rules panel. It should look something like this right now:

In the first iteration of our fun food friends app, anyone could read and write to our database. We're going to change this so that only users that are signed in can write to the database. Change your rules so that it looks like this, and hit Publish:

{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}

These rules tell Firebase to only allow users who are authenticated to read and write from the database.

Preparing Our App to Add Authentication

Now we're going to need to go back to our `firebase.js` file and update our configuration so that we'll be able to use Google as our third party authentication Provider. Right now your `firebase.js` should look something like this:

import firebase from 'firebase'

const config = {
  apiKey: "AIzaSyDblTESEB1SbAVkpy2q39DI2OHphL2-Jxw",
  authDomain: "fun-food-friends-eeec7.firebaseapp.com",
  databaseURL: "https://fun-food-friends-eeec7.firebaseio.com",
  projectId: "fun-food-friends-eeec7",
  storageBucket: "fun-food-friends-eeec7.appspot.com",
  messagingSenderId: "144750278413"
};

firebase.initializeApp(config);

export default firebase;

Before the export default firebase, add the following:

export const provider = new firebase.auth.GoogleAuthProvider();
export const auth = firebase.auth();

This exports the auth module of Firebase, as well as the Google Auth Provider so that we'll be able to use Google Authentication for sign in anywhere inside of our application.

Now we're ready to start adding authentication! Let's head over to `app.js`. First, let's import the auth module and the Google auth provider so that we can use them inside of our app component:

Change this line:

import firebase from './firebase.js';

to:

import firebase, { auth, provider } from './firebase.js';

Now, inside your App's constructor, let's start by carving out a space in our initial state that will hold all of our signed in user's information.

class App extends Component {
  constructor() {
    super();
    this.state = {
      currentItem: '',
      username: '',
      items: [],
      user: null // <-- add this line
    }

Here we set the default value of user to be null because on initial load, the client has not yet authenticated with Firebase and so, on initial load, our application should act as if they are not logged in.

Adding Log In and Log Out

Now, let's add a log in and log out button to our render component so that the user has some buttons they can click to log in to our application:

<div className="wrapper">
  <h1>Fun Food Friends</h1>
  {this.state.user ?
    <button onClick={this.logout}>Log Out</button>
    :
    <button onClick={this.login}>Log In</button>
  }
</div>

If the value of user is truthy, then it means that the user is currently logged in and should see the logout button. If the value of user is null, it means that the user is currently logged out and should see the log in button.

The onClick of each of these buttons will point to two functions that we will create on the component itself in just a second: login and logout.

We'll also need to bind these functions in our constructor, because eventually we will need to call this.setState inside of them and we need access to this:

constructor() {
  /* ... */
  this.login = this.login.bind(this);   // <-- add this line
  this.logout = this.logout.bind(this); // <-- add this line
}

The login method, which will handle our authentication wth Firebase, will look like this:

handleChange(e) {
  /* ... */
}
logout() {
  // we will add the code for this in a moment, but need to add the
  // method now or the bind will throw an error
}
login() {
  auth.signInWithPopup(provider)
    .then((result) => {
      const user = result.user;
      this.setState({
        user
      });
    });
}

Here we call the signInWithPopup method from the auth module, and pass in our provider (remember this refers to the Google Auth Provider). Now, when you click the 'login' button, it will trigger a popup that gives us the option to sign in with a Google account, like this:

signInWithPopup has a promise API that allows us to call .then on it and pass in a callback. This callback will be provided with a result object that contains, among other things, a property called .user that has all the information about the user who has just successfully signed in - including their name and user photo. We then store this inside of the state using setState.

Try signing in and then checking the React DevTools - you'll see the user there!

It's you! This will also contain a link to your display photo from Google, which is super convenient as it allows us to include some UI that contains the signed in user's photo.

The logout method is incredibly straightforward. After the login method inside your component, add the following method:

logout() {
  auth.signOut()
    .then(() => {
      this.setState({
        user: null
      });
    });
}

We call the signOut method on auth, and then using the Promise API we remove the user from our application's state. With this.state.user now equal to null, the user will see the Log In button instead of the Log Out button.

Persisting Login Across Refresh

As of right now, every time you refresh the page, your application forgets that you were already logged in, which is a bit of a bummer. But Firebase has an event listener, onAuthStateChanged, that can check every single time the app loads to see if the user was already signed in last time they visited your app. If they were, you can automatically sign them back in.

We'll do this inside of our componentDidMount, which is meant for these kinds of side effects:

componentDidMount() {
  auth.onAuthStateChanged((user) => {
    if (user) {
      this.setState({ user });
    }
  });
  // ...

When the app loads, Firebase checks whether the user was already authenticated on a previous visit. If they were, their details are passed to the observer and we set them back into the state.
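If it helps to demystify why the callback fires on load, onAuthStateChanged behaves like a plain observer that immediately reports the current (possibly cached) user. Here's a toy in-memory stand-in for that behavior - illustrative only, not the real Firebase SDK:

```javascript
// Toy stand-in for the auth module (not Firebase's actual implementation):
// listeners are invoked immediately with the current user, then again on
// every subsequent sign-in or sign-out.
const toyAuth = {
  currentUser: null,
  listeners: [],
  onAuthStateChanged(cb) {
    this.listeners.push(cb);
    cb(this.currentUser); // fires right away, like on page load
  },
  setUser(user) {
    this.currentUser = user;
    this.listeners.forEach((cb) => cb(user));
  }
};

const seen = [];
toyAuth.onAuthStateChanged((user) => seen.push(user));
toyAuth.setUser({ displayName: 'Ada' }); // simulates a sign-in
// seen is now [null, { displayName: 'Ada' }]
```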

Updating the UI to Reflect the User's Login

Now that our user's authentication details are being tracked successfully in our application's state and synced up with our Firebase database, there's only one step left - we need to link it up to our application's UI.

That way, only signed in users see the potluck list and have the ability to add new items. When a user is logged in, we see their display photo, their name is automatically populated into the 'Add Item' area, and they can only remove their own potluck items.

I want you to start by erasing what you previously had after the <header> inside of your app's render method - it'll be easier to add back each thing at a time. So your app component's render method should look like this.

render() {
  return (
    <div className='app'>
      <header>
        <div className="wrapper">
          <h1>Fun Food Friends</h1>
          {this.state.user ?
            <button onClick={this.logout}>Logout</button>
            :
            <button onClick={this.login}>Log In</button>
          }
        </div>
      </header>
    </div>
  );
}

Now we're ready to start updating the UI.

Show the User's Photo if Logged In, Otherwise Prompt User to Log In

Here we're going to wrap our application in a big ol' ternary. Underneath your header:

<div className='app'>
  <header>
    <div className="wrapper">
      <h1>Fun Food Friends</h1>
      {this.state.user ?
        <button onClick={this.logout}>Logout</button>
        :
        <button onClick={this.login}>Log In</button>
      }
    </div>
  </header>
  {this.state.user ?
    <div>
      <div className='user-profile'>
        <img src={this.state.user.photoURL} />
      </div>
    </div>
    :
    <div className='wrapper'>
      <p>You must be logged in to see the potluck list and submit to it.</p>
    </div>
  }
</div>

Now, when you click login, you should see this:

Show the Add Item Area and Pre-populate with the Signed in User's Login Name or E-mail

<div>
  <div className='user-profile'>
    <img src={this.state.user.photoURL} />
  </div>
  <div className='container'>
    <section className='add-item'>
      <form onSubmit={this.handleSubmit}>
        <input type="text" name="username" placeholder="What's your name?" value={this.state.user.displayName || this.state.user.email} />
        <input type="text" name="currentItem" placeholder="What are you bringing?" onChange={this.handleChange} value={this.state.currentItem} />
        <button>Add Item</button>
      </form>
    </section>
  </div>
</div>

Here we set the value of our username field to this.state.user.displayName if it exists (sometimes users don't have their display name set), and if it doesn't, we set it to this.state.user.email. This will lock the input and make it so that the user's name or email is automatically entered into the Add Item field for them.
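That fallback is just the || operator at work. Extracted into a standalone function (a hypothetical helper, not part of the app's code), it behaves like this:

```javascript
// Falls back to the email whenever displayName is null, undefined or empty.
const displayNameFor = (user) => user.displayName || user.email;

displayNameFor({ displayName: 'Ada Lovelace', email: 'ada@example.com' }); // "Ada Lovelace"
displayNameFor({ displayName: null, email: 'ada@example.com' });          // "ada@example.com"
```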

We'll also update the handleSubmit since we no longer rely on handleChange to set the user's name in the state, but can grab it right off of this.state.user:

handleSubmit(e) {
  // ....
  const item = {
    title: this.state.currentItem,
    user: this.state.user.displayName || this.state.user.email
  }
  // ....
}

Your app should now look like this:

Displaying Potluck Items, and Giving the User the Ability to Only Remove Their Own

Now we'll add back our list of potluck items. We'll also add a check for each item to see if the user who is bringing the item matches the user who is currently logged in. If it does, we'll give them the option to remove that item. This isn't foolproof by far and I wouldn't rely on this in a production app, but it's a cool little nice-to-have we can add to our app:

<div className='container'>
  {/* .. */}
  <section className='display-item'>
    <div className="wrapper">
      <ul>
        {this.state.items.map((item) => {
          return (
            <li key={item.id}>
              <h3>{item.title}</h3>
              <p>brought by: {item.user}
                {item.user === this.state.user.displayName || item.user === this.state.user.email ?
                  <button onClick={() => this.removeItem(item.id)}>Remove Item</button>
                  : null}
              </p>
            </li>
          )
        })}
      </ul>
    </div>
  </section>
</div>

Instead of displaying the remove button for each item, we write a quick ternary that checks to see if the person who is bringing a specific item matches the user who is currently signed in. If there's a match, we provide them with a button to remove that item:

Here I can remove Pasta Salad, since I added it to the potluck list, but I can't remove potatoes (who brings potatoes to a potluck? My sister, apparently.)
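The condition inside that ternary reads naturally as a small predicate. Pulled out into a function (hypothetical, for illustration only), the logic is:

```javascript
// An item is removable when it was brought by the signed-in user,
// matched by display name or, failing that, by email.
const canRemove = (item, user) =>
  item.user === user.displayName || item.user === user.email;

const me = { displayName: 'Ada', email: 'ada@example.com' };
canRemove({ user: 'Ada' }, me);   // true
canRemove({ user: 'Grace' }, me); // false
```

As the article notes, this is a nice-to-have in the UI rather than a real access control.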

And that's all there is to it! Adding authentication to a new (or existing) Firebase application is a snap. It's incredibly straightforward, can be added with minimal refactoring, and allows authentication to persist across page refreshes.

It's important to note that this is a trivial application - you would want to add additional checks and balances for storing any kind of secure information. But for our application's simple purposes, it's a perfect fit!

Firebase & React Part 2: User Authentication is a post from CSS-Tricks

Local by Flywheel

Css Tricks - Thu, 07/06/2017 - 4:59am

I've switched all my local WordPress development over to Local by Flywheel. I heard about it from y'all when we did a poll not too long ago about local WordPress development. Bottom line: it's really good. It does everything you want it to, well, with zero hassle, and nothing more.

Running Multiple WordPress Installs (PHP, MySQL, Web Server)

That's kind of the whole point. Local by Flywheel spins up a local site for you with all the dependencies that WordPress needs. Just by picking a few options and giving the site a name, you've spun up a new WordPress install in a few seconds.

And it's ready to go!

Nice UI

Surely what Local by Flywheel is doing under the hood is quite complicated, but the UI for the app isn't at all. I'm a big fan of apps like this. The super clean UI makes everything feel so easy and simple, despite it actually being complex and powerful. Just the information and controls you need!

HTTPS (SSL)

The web is moving more and more toward all-HTTPS, which is fantastic. With all the setup options Local by Flywheel offers, you can get your production and development versions of your site pretty close. We should be taking that another step further and be working locally over HTTPS, if our production sites are.

Local by Flywheel doesn't just make it easy, it automatically sets up HTTPS for you! And of course, it just works. You probably want to trust that local certificate though to make it even smoother.

But wait! Don't follow my awkward and slightly complex instructions. There is a one-click button right in Local by Flywheel to trust the certificate.

Combining with CodeKit

For all my simple mostly-solo projects, I've long been a fan of having CodeKit watch the project, so I preprocess all my CSS and JavaScript, optimize my images, and all that good task runner stuff. That's easy, just point the CodeKit browser refreshing URL at the Local by Flywheel URL.

Migrating

Another one-click button I love in Local by Flywheel is the one that jumps you right to Sequel Pro.

This was mighty handy for me as I was migrating from a couple of different setups. For a zillion years I used MAMP, and configured Sequel Pro to be my database manager. Then for a bit, I switched over to Docker to manage my local WordPress stuff, which was fun and interesting but ultimately wasn't as easy as I wanted it to be. I also used Sequel Pro when I was in that phase.

So I was able to really quickly export and import the databases where I needed them!

It's also worth mentioning that if you don't have an existing local setup you're migrating from, but do have a production site, I highly recommend WP Migrate DB Pro for yanking down that production database in an extremely painless fashion.

Live Link

As if that wasn't enough, they tossed in one more really cool little feature. One click on the "Live Link" feature, and it fires up an ngrok URL for you. That's a live-on-the-internet URL you can use to share your localhost. Send it to a client! Debug a mobile issue! Very cool.

TLDR: I'm a fan of Local by Flywheel!

Local by Flywheel is a post from CSS-Tricks

The Options for Programmatically Documenting CSS

Css Tricks - Thu, 07/06/2017 - 3:38am

I strongly believe that the documentation should be kept as close to the code as possible. Based on my experience, that's the only option that works well in the long term. External documents, notes, and wikis all eventually get outdated, forgotten, and lost.

Documentation is a topic that always bugs me. Working on a poorly documented codebase is a ticking bomb. It makes the onboarding process a tedious experience. Another way to think of bad documentation is that it helps foster a low truck factor (that is, "the number of people on your team who have to be hit by a truck before the project is in serious trouble").

Recently I was on-boarded into a project with more than 1,500 pages of documentation written in… Microsoft Word. It was outdated and unorganized. A real disaster. There must be a better way!

I've talked about this documentation issue before. I scratched the surface not long ago here on CSS-Tricks in my article What Does a Well-Documented CSS Codebase Look Like? Now, let's drill down into the options for programmatically documenting code. Specifically CSS.

Similar to JSDoc, in the CSS world there are a couple of ways to describe your components right in the source code as /* comments */. Once code is described through comments like this, a living style guide for the project could be generated. I hope I've stressed enough the word living since I believe that's the key for successful maintenance. Based on what I've experienced, there are a number of benefits to documenting code in this way that you experience immediately:

  • The team starts using a common vocabulary, reducing communication issues and misunderstandings significantly.
  • The current state of your components visual UI is always present.
  • Helps transform front-end codebases into well-described pattern libraries with minimal effort.
  • Helpful as a development playground.

It's sometimes argued that a development approach focused on documentation is quite time-consuming. I am not going to disagree with that. One should always strive for a balance between building functionality and writing docs. As an example, in the team I'm currently on, we use an agile approach to building stuff and there are blocks of time in each sprint dedicated to completing missing docs.

Of course, there are times when working software trumps comprehensive documentation. That's completely fine, as long as the people responsible are aware and have a plan for how the project will be maintained in the long run.

Now let's take a look at the most popular documentation options in CSS:

Knyle Style Sheets (KSS)

KSS is a documentation specification and style guide format. It attempts to provide a methodology for writing maintainable, documented CSS within a team. Most developers in my network use it due to its popularity, expressiveness, and simplicity.

The KSS format is human-readable and machine-parsable. Therefore, it is intended to help automate the creation of a living style guide.

Similar to JSDoc, in KSS, CSS components are described right in the source code as comments. Each KSS documentation block consists of three parts: a description of what the element does or looks like, a list of modifier classes or pseudo-classes and how they modify the element, and a reference to the element's position in the style guide. Here's how it looks:

// Primary Button
//
// Use this class for the primary call to action button.
// Typically you'll want to use either a `<button>` or an `<a>` element.
//
// Markup:
// <button class="btn btn--primary">Click Me</button>
// <a href="#" class="btn btn--primary">Click Me</a>
//
// Styleguide Components.Buttons.Primary
.btn--primary {
  padding: 10px 20px;
  text-transform: uppercase;
  font-weight: bold;
  background-color: yellow;
}

Benjamin Robertson describes in detail his experience with kss-node, which is a Node.js implementation of KSS. Additionally, there are a bunch of generators that use the KSS notation to generate style guides from stylesheets. A popular option worth mentioning is the SC5 Style Generator. Moreover, their documenting syntax is extended with options to introduce wrapper markup, ignore parts of the stylesheet from being processed, and other nice-to-have enhancements.

Other sometimes useful (but in my opinion mostly fancy) things are:

  • With the designer tool you can edit Sass, Less or PostCSS variables directly via the web interface.
  • There is a live preview of the styles on every device.

Who knows, they might be beneficial for some use-cases. Here's an interactive demo of SC5.

GitHub's style guide (Primer) is KSS generated.

Unlike the JavaScript world, where JSDoc is king, there are still a bunch of tools that don't use the KSS conventions. Therefore, let's explore two alternatives I know of, ranked based on popularity, recent updates and my subjective opinion.

MDCSS

If you're searching for a simple, concise solution, mdcss could be the answer. Here's an interactive demo. To add a section of documentation, write a CSS comment that starts with three dashes ---, like so:

/*---
title: Primary Button
section: Buttons
---

Use this class for the primary call to action button.
Typically you'll want to use either a `<button>` or an `<a>` element

```example:html
<button class="btn btn--primary">Click</button>
<a href="#" class="btn btn--primary">Click Me</a>
```
*/
.btn--primary {
  text-transform: uppercase;
  font-weight: bold;
  background-color: yellow;
}

The contents of a section of documentation are parsed by Markdown and turned into HTML, which is quite nice! Additionally, the contents of a section may be automatically imported from another file, which is quite useful for more detailed explanations:

/*---
title: Buttons
import: buttons.md
---*/

Each documentation object may contain a bunch of properties like title (of the current section), unique name, context, and a few others.
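As a rough illustration of how a generator could consume these properties, here's a toy front-matter parser - illustrative only, not mdcss's actual implementation:

```javascript
// Toy parser: pulls "key: value" pairs out of an mdcss-style header block.
// mdcss itself does considerably more (Markdown, imports, examples).
const parseHeader = (comment) => {
  const props = {};
  for (const line of comment.split('\n')) {
    const m = line.match(/^(\w+):\s*(.+)$/);
    if (m) props[m[1]] = m[2];
  }
  return props;
};

const props = parseHeader('title: Primary Button\nsection: Buttons');
// props: { title: "Primary Button", section: "Buttons" }
```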

Some other tools that have been on my radar, with very similar functionalities are:

Nucleus

Nucleus is a living style guide generator for Atomic Design based components. Nucleus reads the information from DocBlock annotations.

Atomic Design is a guideline for writing modular styles, projecting different levels of complexity on a (bio-)chemical scale. This results in low selector specificity and allows you to compose complex entities out of simple elements. If you're not familiar with Atomic Design, the learning curve may look a bit overwhelming in the beginning. The entities for Nucleus include:

  • Nuclides: not directly useable on their own styles (mixins, settings, variables).
  • Atoms: single-class element or selector rules (buttons, links, headlines, inputs…).
  • Molecules: one or more nested rules, but each of them is not more than an Atom
  • Structures: the most complex types, may consist of multiple molecules or other Structures.
  • … and a few more.

The button example we use throughout this article stands for an Atom: a very basic element of the stylesheet (a single-class element or selector). To mark it as an Atom, we need to annotate it with the @atom tag, followed by the name of the component:

/**
 * @atom Button
 * @section Navigation > Buttons
 * @modifiers
 *   .btn--primary - Use this class for the primary call to action button.
 * @markup
 *   <button class="btn">Click me</button>
 *   <button class="btn btn--primary">Click me</button>
 *   <a href="#" class="btn btn--primary">Click Me</a>
 */
.btn--primary {
  text-transform: uppercase;
  font-weight: bold;
  background-color: yellow;
}

Here's an interactive demo.

Conclusion

There is yet to be a clear winner in terms of a tool or a common syntax definition for programmatically documenting CSS.

On the one hand, it seems like KSS leads the group, so I'd say it's worth considering for a long-term project. My gut feeling is that it will last for a long time. On the other hand, different syntax options and tools like Nucleus and MDCSS look promising too. I would encourage you to try them on short-term projects.

It's important to note that all the tools presented in this article can do the job well and seem scalable enough. So try them out and pick whichever makes the most sense for your team.

I'd appreciate it if you would share in the comments below if you have experience with any of these, or know of other tools worth knowing about!

The Options for Programmatically Documenting CSS is a post from CSS-Tricks

The Structure of an Elm Application

Css Tricks - Wed, 07/05/2017 - 2:06am

Most languages, when they are in their infancy, tend to be considered "toy languages" and are only used for trivial or small projects. This is not the case with Elm, whose true power shines in large and complex applications.

It is not only possible to build some parts of an application in Elm and integrate those components into a larger JS application, but it is also possible to build the entire application without touching any other language, making it an excellent alternative to JS frameworks like React.

In this article, we will explore the structure of an Elm application using a simple site to manage plain-text documents as an example.

Article Series:
  1. Why Elm? (And How To Get Started With It)
  2. Introduction to The Elm Architecture and How to Build our First Application
  3. The Structure of an Elm Application (You are here!)

Some of the topics covered in this article here are:

  • Application architecture and how things flow through it.
  • How the init, update and view functions are defined in the example application.
  • API communication using Elm Commands.
  • Single-page routing.
  • Working with JSON data.

Those are the principal topics that you will encounter when building almost any type of application, and the same principles can be extended to larger projects just by adding the needed functionality, without significant or fundamental changes.

To follow this article, it is recommended that you clone the GitHub repository on your computer so you can see the whole picture. This article explains everything descriptively rather than as a step-by-step tutorial about syntax and other details, so the Elm Syntax page and the Elm Packages site can be very helpful for specifics about the code, like the type signatures of the functions used.

About the Application

The example that we will describe in this article is a CRUD application for plain-text documents, with communication to an API via HTTP to interact with a database. Since the main topic of this article is the Elm application, the server will not be explained in detail; it is just a Node.js fake REST API that stores data in a simple JSON file, using json-server.

In the application, the main page contains an input to write the title of a new document and, below it, a list of previously created documents. When you click a document or create a new one, an edit page is shown where you can view, edit, and add content.

Below the title of the document, there are two links: save and delete. When save is clicked, the current document is sent to the server as a PUT request. When delete is clicked, a DELETE request is sent for the current document's id.

Documents are created by sending a POST request containing the title of the new document and an empty content field. Once the document is created, the application switches to edit mode and you can continue adding text.

A Quick Tip on Working with the Source Files

When you work with Elm–and probably with any other language–you will have to handle external details other than the code itself. The main thing you would want to do is to automate the development commands: starting the API server, starting Elm Reactor, compiling the source files, and starting a file watcher to recompile on every change.

In this case, I have used Jake, which is a simple Node.js tool similar to Make. In the Jakefile I have included all the necessary commands and one default command that will run the other ones in parallel, so I can just execute jake default on the terminal/command line and everything will be up and running.

If you have a bigger project, you can also use more sophisticated tools like Gulp or Grunt.

After cloning the application and installing dependencies, you can execute the npm start command, which starts the API server, the Elm Reactor, and a file watcher that recompiles the Elm files each time something changes. You can see the compiled files at http://localhost:3000 and you can see the application in debug mode (with Elm Reactor) at http://localhost:8000.

The Application Architecture

In the previous article we introduced the idea of the Elm architecture, but to avoid complexity we presented a beginner version with the Html.beginnerProgram function. In this application, we use an extended version that allows us to include commands and subscriptions, although the principles remain the same.

The complete structure is as follows:

Now we have an Html.program function that accepts a record containing 4 functions:

  • init : ( Model, Cmd Msg ): The init function returns a tuple with the application model and a command carrying a message. Commands allow us to communicate with the external world and produce side effects; we will use this one to get and send data to the API via HTTP.
  • update : Msg -> Model -> ( Model, Cmd Msg ): The update function takes two things: a message covering all the possible actions in our application, and a model containing the state of the application. It returns the same kind of tuple as the previous function, but with values updated depending on the message we get.
  • view : Model -> Html Msg: The view function takes a model containing our application state and returns Html able to handle messages. Usually, it will contain a series of functions that resemble HTML and render values from the Model.
  • subscriptions : Model -> Sub Msg: The subscriptions function takes a model and returns a subscription carrying a message. Each time a subscription receives something, it sends a message that can be caught in the update function. We can subscribe to things that can happen at any time, like the movement of the mouse or an event in the network.

You can have as many functions as you want, but in the end, everything flows back through these four.

Source: https://guide.elm-lang.org/architecture/effects/

Behind the scenes, the Elm Runtime handles the flow of our application; all we do is define the things that flow. First, we describe the initial state of the application, its data structure, and a command that gets executed initially. The views are then shown based on that data. On each interaction, subscription event, or command execution, a new message is sent to the update function and the cycle starts again, with a new model and/or a new command being executed.

As you can see, we don't actually have to deal with any control flow in the application; Elm being a functional language, we just declare things.

Routing: The Navigation.program Function

The example application is composed of two main pages: the home page, containing a list of previously created documents, and an edit page, where you can view or edit a document. The transition between the two views happens without reloading the page; instead, just the needed data is fetched from the server and the view is updated, including the URL (the back and forward buttons still work as expected).

To achieve this we have used two packages: Navigation and UrlParser. The first handles the navigation part, and the second helps us interpret the URL paths.

The navigation package provides a wrapper for the Html.program function that allows us to handle page locations, so in the code we use Navigation.program instead. It is basically the same as the previous function, but it also accepts a message, which we have called UrlChange, that is sent every time the browser changes location. The message carries a value of type Navigation.Location containing all the information we might need, including the path, which we can parse to select the right view to show.

The Init Function

The init function can be considered the entry point of the application, representing both the initial state (model) and any command that we want to execute when the application starts.

Type Definitions

We begin by defining the types of values that we will be using, starting with the Model type, which contains all the state of the application:

type alias Model =
    { currentLocation : Maybe Route
    , documents : List Document
    , currentDocument : Maybe Document
    , newDocument : Maybe String
    }
  • We store a currentLocation value of type Maybe Route that contains the current location on the page; we use this value to know what to show on the screen.
  • We have a list of documents called documents, where we store all the documents from the database. We don't need a Maybe value here; if we don't have any documents, we can just have an empty list.
  • We also need a currentDocument value of type Maybe Document. It will contain Just Document when we open a document and Nothing if we are on the home page; this value is obtained when we request a specific document from the database.
  • Finally, we have newDocument, which represents the title of a new document in the form of a Maybe String, being Just String when there is something in the input field; otherwise, it is Nothing. This value is sent to the API when the form is submitted.

Note: It might look unnecessary to have that value here. Coming from JavaScript, you might think that you could just get the value directly from the input element, but in Elm you have to define everything in the model: when you enter something into the input element, the model gets updated, and when you submit the form, the value in the model is sent via HTTP.
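As an illustration, the title input in the view might be wired up roughly like this (a sketch only; the exact markup in the repository may differ, and it assumes the usual Html, Html.Attributes, and Html.Events imports):

```elm
viewNewDocumentInput : Model -> Html Msg
viewNewDocumentInput model =
    input
        [ placeholder "Document title"

        -- Render whatever is currently stored in the model…
        , value (Maybe.withDefault "" model.newDocument)

        -- …and send a message on every keystroke so update can store it.
        , onInput NewDocumentMsg
        ]
        []
```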

As you can see, in Model we are also using other types that we have to define, namely Document and Route:

type alias Document =
    { id : Int
    , title : String
    , content : String
    }

type Route
    = HomeRoute
    | DocumentRoute Int

First, we define a Document type alias to represent the structure of a document: an id value containing an integer, then title and content, both of type String.

We also create a union type, which groups two or more related values; in this case they are useful for the navigation of the application. You can name them as you want. We only have two: one for the homepage called HomeRoute, and another for the edit view called DocumentRoute, which carries an integer representing the id of the specific document requested.

Putting it Together

Once we have the types defined, we proceed to declare the init function, with its initial values.

init : Navigation.Location -> ( Model, Cmd Msg )
init location =
    ( { currentLocation = UrlParser.parsePath route location
      , documents = []
      , currentDocument = Nothing
      , newDocument = Nothing
      }
    , getDocumentsCmd
    )

After introducing the navigation package, our init function now accepts a value of type Navigation.Location, which contains information from the browser about the current page location. We store that value in a location parameter so we can parse and save it as currentLocation; we use that value to know the correct view to show.

The currentLocation value is obtained using the parsePath function from the Navigation package, it accepts a parser function (of type Parser (a -> a) a) and a Location.

The stored value in currentLocation has a Maybe type. For example, if we have a /documents/12 path in our browser, we would get Just DocumentRoute 12.

The parser function that we have called route is built like this:

route : UrlParser.Parser (Route -> a) a
route =
    UrlParser.oneOf
        [ UrlParser.map HomeRoute UrlParser.top
        , UrlParser.map DocumentRoute (UrlParser.s "document" </> UrlParser.int)
        ]

The most important parts are:

UrlParser.map HomeRoute UrlParser.top

We basically create a relation, where HomeRoute is the type that we defined for the home route, and UrlParser.top represents the root (/) of the path.

Then we have:

UrlParser.map DocumentRoute (UrlParser.s "document" </> UrlParser.int)

Here we again have a route type, called DocumentRoute, and then (UrlParser.s "document" </> UrlParser.int), which represents a path like /document/<id>. The s function accepts a string, in this case document, and will match anything with document in it (like /document/…). Then we have the </> function, which can be considered a representation of the slash character (/) in the path, separating the document part from the int value: the id of the document that we want to see.
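To make this concrete, here is roughly what the parser produces for a few paths (illustrative comments; location stands for a Navigation.Location whose pathname is shown):

```elm
-- pathname = "/"
UrlParser.parsePath route location
-- Just HomeRoute

-- pathname = "/document/12"
UrlParser.parsePath route location
-- Just (DocumentRoute 12)

-- pathname = "/unknown"
UrlParser.parsePath route location
-- Nothing
```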

The rest of our model consists of a list of documents, which by default is empty, although it's populated once the getDocumentsCmd command finishes. There are also values for the current document and a new document, both starting as Nothing.

The Update Function

The update function takes a message and a model as input and returns a tuple with a new model and a command as output. Usually, the output depends on the message being processed.

In our application we have defined a message for each event:

  • When a new page location is requested.
  • When the page location changes.
  • When a new document title is being entered in the input element.
  • When a new document is saved and created.
  • When all the documents in the database have been retrieved.
  • When a specific document is requested and retrieved.
  • When the title or content of a document is being updated.
  • When a specific document is saved and retrieved.
  • When a document is deleted.

And this can be done using a union type:

type Msg
    = NewUrl String
    | UrlChange Navigation.Location
    | NewDocumentMsg String
    | CreateDocumentMsg
    | CreatedDocumentMsg (Result Http.Error Document)
    | GotDocumentsMsg (Result Http.Error (List Document))
    | ReadDocumentMsg Int
    | GotDocumentMsg (Result Http.Error Document)
    | UpdateDocumentTitleMsg String
    | UpdateDocumentContentMsg String
    | SaveDocumentMsg
    | SavedDocumentMsg (Result Http.Error Document)
    | DeleteDocumentMsg Int
    | DeletedDocumentMsg (Result Http.Error String)

Some messages need to carry some extra information, and it can be defined next to the message name, for example, the NewUrl message has a String attached containing a new URL path.

Also, most of the messages come in pairs, especially messages that add a new command to the Runtime: one message is sent before the command is executed and the other one after it has been executed.

For example, when you delete a document, you send a DeleteDocumentMsg message with the id of the document to be deleted, then once the document is deleted a DeletedDocumentMsg message is sent containing the result of the HTTP call: a status value Http.Error and the result as a String.

As we will see next, the messages containing the result of a command should be pattern-matched for both of its values, either as an error or as a success value.

Once we have all the messages defined, we can start working on what we will do with each one. For this, we pattern-match on the message. Let's take the reading of a specific document as an example:

ReadDocumentMsg id ->
    ( model, getDocumentCmd id )

This will match the ReadDocumentMsg message containing an int (as per our type definition) named id.

Note: The name of the int value is assigned when it is matched; before it gets matched, the value is just something of type Int.

Then we return a tuple containing the model without any changes, but we also return a command to be executed, called getDocumentCmd, which receives the id of the document as input. Do not worry about the command definition yet; we will get into it below.

Now we need to match the message that is sent once we get the requested document:

GotDocumentMsg (Ok document) ->
    ( { model | currentDocument = Just document }, Cmd.none )

GotDocumentMsg (Err _) ->
    ( model, Cmd.none )

Remember that the GotDocumentMsg message carries a (Result Http.Error Document) value, so we have to match its two possible outcomes: success and failure.

The first case here matches if the result is Ok, meaning that there was no error, and the second value will be the retrieved document. We can then return the tuple containing a modified model where the currentDocument value is the document we just got, preceded by Just, because currentDocument has a Maybe type. Also, in the second part of the tuple, we indicate that we will not execute any command (Cmd.none).

In the second case, where there was an error, we match a value of type Err, and we can use _ as a placeholder for anything that could be there. In a real-world application we could show an information box informing the user about the error, but to avoid complexity in this example, we simply ignore it: we return the model again without any changes, and we don't execute any command.

All of the other message matches follow the same pattern: they return a new model with the information carried by the message and/or they execute a command.
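For instance, the message carrying the full document list presumably follows the same shape (a sketch consistent with the Msg and Model types above; the exact code may differ):

```elm
GotDocumentsMsg (Ok documents) ->
    -- Store the retrieved list in the model; no further command is needed.
    ( { model | documents = documents }, Cmd.none )

GotDocumentsMsg (Err _) ->
    -- Errors are ignored, again to keep the example simple.
    ( model, Cmd.none )
```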

API Communication with Commands

Although Elm is a pure functional programming language, we can still perform side effects, like communicating with a server via HTTP, and this is done using Commands.

As we have seen previously, every time a message is matched, we return a tuple containing a new model and a command. A command is any function that returns a value of type Cmd.

Let's take a look at the command that we included in our init function, which performs a request to the server with all the documents in the database when the application is starting:

getDocumentsCmd : Cmd Msg
getDocumentsCmd =
    let
        url =
            "http://localhost:3000/documents?_sort=id&_order=desc"

        request =
            Http.get url decodeDocuments
    in
        Http.send GotDocumentsMsg request

The two important parts of the function are its type declaration getDocumentsCmd : Cmd Msg and Http.send GotDocumentsMsg request in the in section.

The type means that it is a command carrying a message; it comes from the type that the Http.send function returns, which you can see in the package documentation.

In the body of the function, we can see the message that is sent once the request has been completed. For clarity purposes, we have created two variables: one with the URL of the API where the request is sent, and the other one containing the request itself that Http.send will send.

The request is built using the Http.get function, since we want to send a GET request to the server.

You can also notice a decodeDocuments function in there. It is a JSON decoder; we use it to transform the JSON response from the server into a usable Elm value. We will see how the decoders used in this application are built in the next section.

The command to get a single document from the server is quite similar, since the Http.get function does most of the work for us to build the request. We just change the URL of the resource that we want, in this case using the id of the requested document.
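As a sketch, assuming the same json-server API shape (the actual code lives in the repository), the single-document command might look like this:

```elm
getDocumentCmd : Int -> Cmd Msg
getDocumentCmd id =
    let
        -- Request a single resource by id, e.g. /documents/12
        url =
            "http://localhost:3000/documents/" ++ toString id

        request =
            Http.get url decodeDocument
    in
        Http.send GotDocumentMsg request
```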

But to send data to the server, the story is a little different: there we build the request ourselves using the Http.request function.

Let's examine the function that sends a new document to the server:

createDocumentCmd : String -> Cmd Msg
createDocumentCmd documentTitle =
    let
        url =
            "http://localhost:3000/documents"

        body =
            Http.jsonBody <| encodeNewDocument documentTitle

        expectedDocument =
            Http.expectJson decodeDocument

        request =
            Http.request
                { method = "POST"
                , headers = []
                , url = url
                , body = body
                , expect = expectedDocument
                , timeout = Nothing
                , withCredentials = False
                }
    in
        Http.send CreatedDocumentMsg request

Again we have a function that returns a value of type Cmd Msg, but now we also take a value of type String, which is the title of the new document to be created.

Using the Http.request function, we pass a record as a parameter containing all the parts of the request. We are mainly interested in the following:

  • method: The HTTP method of the request, we previously used GET to get information from the server, but now that we are sending the information we use the method POST.
  • url: The API endpoint that receives the request.
  • body: The body of the request, containing the document that we want to add to the database in the form of JSON. To build the body we use the Http.jsonBody function, which automatically adds a Content-Type: application/json header for us. This function expects a JSON value, which we produce using a JSON encoder and the title of the new article. We will see how the JSON encoder is implemented in the next section.
  • expect: Here we indicate how we should interpret the response of the request, in our case, we will get back the new document, so we use the Http.expectJson function to transform the response using our decodeDocument JSON decoder.

The Http.send function is practically the same as the one we mentioned previously; the only difference is that now we will send a CreatedDocumentMsg message once the document has been created.

The command to update a document is also very similar to the command to create a new document, the main differences being:

  • We send the data to a different API endpoint depending on the id of the document that we want to update.
  • The body is built with a complete document and is encoded to JSON using a different encoder.
  • The HTTP method used is PUT, which is the preferred method for making updates to existing resources.
  • We use the SavedDocumentMsg message once we receive a response.
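Putting those differences together, the update command might look roughly like this (the function name updateDocumentCmd is assumed here; check the repository for the actual code):

```elm
updateDocumentCmd : Document -> Cmd Msg
updateDocumentCmd document =
    let
        -- The endpoint depends on the id of the document being updated.
        url =
            "http://localhost:3000/documents/" ++ toString document.id

        -- The body carries the complete document, encoded to JSON.
        body =
            Http.jsonBody <| encodeUpdatedDocument document

        request =
            Http.request
                { method = "PUT"
                , headers = []
                , url = url
                , body = body
                , expect = Http.expectJson decodeDocument
                , timeout = Nothing
                , withCredentials = False
                }
    in
        Http.send SavedDocumentMsg request
```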

Lastly, we have the deleteDocumentCmd command function. The principles remain the same, but in this case we will not send anything in the body of the request, so we use Http.emptyBody. Also, we indicate that we expect a String value, but it does not really matter, since we are not using it for anything in our application.

Working with JSON Values

In Elm, we can't use JSON directly in our code, nor can we use a simple parsing function like JSON.parse() as we do in JavaScript, since we have to make sure that the data we are handling is type-safe.

To use JSON in Elm, we have to decode the JSON value into an Elm value, and then we can work with it; we do this using JSON decoders. The inverse is similar: to produce a JSON value, we have to encode an Elm value using JSON encoders.

In our sample application, we have two decoders and two encoders. Let's analyze the decoders:

decodeDocuments : Decode.Decoder (List Document)
decodeDocuments =
    Decode.list decodeDocument

decodeDocument : Decode.Decoder Document
decodeDocument =
    Decode.map3 Document
        (Decode.field "id" Decode.int)
        (Decode.field "title" Decode.string)
        (Decode.field "content" Decode.string)

A decoder function has to have a type Decoder (which in this case is Decode.Decoder because of the way we imported the JSON package). The signature also indicates the type of the data in the decoder: the first one decodes a list of documents, so its type is List Document, and the second one decodes a single document, so it has the Document type (the type we defined at the beginning of the application).

As you may notice, we are actually composing these two decoders: because the first one decodes a list of documents, we can pass the document decoder to the Decode.list function.

It is in the decodeDocument decoder where the real work happens. We use the Decode.map3 function to decode a value with three fields: id, title, and content, each with its respective type. The result is then put into the Document type we defined at the beginning of the application to create the final value.

Note: Elm has eight mapping functions to handle JSON values; if you need more than that, you can use the elm-decode-pipeline package, which allows building arbitrary decoders using the pipeline (|>) operator.
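Decoders can also be tried out directly with the Decode.decodeString function, which parses a JSON string and returns a Result (the document below is illustrative):

```elm
-- Decoding a well-formed document succeeds:
Decode.decodeString decodeDocument """{ "id": 1, "title": "Hi", "content": "" }"""
-- Ok { id = 1, title = "Hi", content = "" }

-- Decoding a value with the wrong shape fails with a descriptive error:
Decode.decodeString decodeDocument """{ "id": "oops" }"""
-- Err "..."
```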

Now we can see how the two encoders are implemented:

encodeNewDocument : String -> Encode.Value
encodeNewDocument title =
    let
        object =
            [ ( "title", Encode.string title )
            , ( "content", Encode.string "" )
            ]
    in
        Encode.object object

encodeUpdatedDocument : Document -> Encode.Value
encodeUpdatedDocument document =
    let
        object =
            [ ( "id", Encode.int document.id )
            , ( "title", Encode.string document.title )
            , ( "content", Encode.string document.content )
            ]
    in
        Encode.object object

To encode a JavaScript-style object, we use the Encode.object function, which accepts a list of tuples, each tuple containing the name of the key and the value encoded with a function matching its type, Encode.int and Encode.string in this case. Also, unlike decoders, these functions always return a value of type Value.

Because we are creating a document with an empty content, the first encoder only needs the title of that document and we manually encode an empty content field right before sending it to the API. The second encoder accepts a complete document and just produces a JSON equivalent.
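A Value can be turned into an actual JSON string with the Encode.encode function (Http.jsonBody does this for us behind the scenes; the document below is illustrative):

```elm
-- The first argument is the indentation level; 0 produces compact JSON.
Encode.encode 0
    (encodeUpdatedDocument { id = 1, title = "Hello", content = "World" })
-- produces {"id":1,"title":"Hello","content":"World"}
```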

You can see more functions related to JSON in the Elm Packages site: Json.Decode and Json.Encode

The View Function

The view function, compared with the code of previous articles, remains pretty straightforward. The interesting change here is the way we show each page depending on the URL path.

First of all, we have a link that always points to the home page, and the way we do this, instead of relying on regular link behavior, is by capturing the click event and sending a NewUrl message with the new path.

Because we are still using regular <a> elements in our application instead of buttons, we have created a custom event called onClickLink, which is the same as the onClick event but prevents the default behavior (preventDefault) of the clicked element.

The implementation of this event is as follows:

onClickLink : msg -> Attribute msg
onClickLink message =
    let
        options =
            { stopPropagation = False
            , preventDefault = True
            }
    in
        onWithOptions "click" options (Decode.succeed message)

The important thing to note here is the use of the onWithOptions function, which allows us to add two options to the click event: stopPropagation and preventDefault. The option that does the trick here is preventDefault which prevents the default behavior of the <a> element.
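In the view, the home link might then be written like this (a sketch; the actual markup is in the repository):

```elm
homeLink : Html Msg
homeLink =
    -- The href stays in place, but a normal click is intercepted and
    -- routed through the Elm runtime as a NewUrl message instead of
    -- triggering a full page load.
    a [ href "/", onClickLink (NewUrl "/") ] [ text "Home" ]
```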

Next, we have the implementation of the function that handles the page that is shown depending on the path in the URL:

page : Model -> Html Msg
page model =
    case model.currentLocation of
        Just route ->
            case route of
                HomeRoute ->
                    viewHome model

                DocumentRoute id ->
                    case model.currentDocument of
                        Just document ->
                            viewDocument document

                        Nothing ->
                            div [] [ text "Nothing here…" ]

        Nothing ->
            div [] [ text "404 – Not Found" ]

Remember that we are storing the current location in a currentLocation variable in the model, so we can apply pattern matching to that variable and show something depending on its value. In our example, we first check whether the Maybe value is Just Route or Nothing; then, if we have a route, we check whether it is a HomeRoute or a DocumentRoute. For the first case we include the viewHome function, which represents the content of the homepage, and for the second case we pass the currentDocument value to the viewDocument function, which shows the selected document.

For each document entry, notice in the viewDocumentEntry function that we again send a NewUrl message with the link to the respective document, using the onClickLink event. This message is responsible for loading the corresponding document.

Finally, we can add inline CSS in each component by adding a function of type Attribute, using Html.Attributes.style, which has the following form:

myStyleFunction : Attribute Msg
myStyleFunction =
    Html.Attributes.style
        [ ( "<property>", "<value>" ) ]

In the example application, some of the components have their styles defined this way, while other generic styles are included directly in the HTML file where the application is embedded. You can choose between including CSS files directly, as you would normally do on any website, or writing the styles directly within the Elm source files. While the method shown in this example is quite simplistic, there is a specialized library for this in case you need more control: Elm-css.

A Word on Subscriptions

Subscriptions can be a common thing in a lot of applications, and although we didn't use subscriptions in this example, their mechanism is quite simple: they allow us to listen for things that can happen at any moment, without knowing when they will happen.

Let's see the basic structure of a subscription:

subscriptions : Model -> Sub Msg
subscriptions model =
    WebSocket.listen "ws://echo.websocket.org" NewMessage

The first thing to mention is that all subscriptions have a type Sub, here we have Sub Msg because we are sending a message each time we receive something on the subscription.

The way this works is that the WebSocket.listen function creates a socket listener for the address ws://echo.websocket.org, and each time something arrives, the NewMessage message is sent. In our update function, we can then react appropriately to this message, as we have done previously (thanks to the Elm Architecture).

Application Embedding

Now that we have seen how a complete application is constructed, it's time to see how we can include that application in an HTML file for distribution. Although Elm can generate HTML files, you can also generate just JavaScript and include it yourself, so you retain control over other things, like the styling.

In HTML you can include the following:

…
<body>
  <main>
    <!-- The app is going to appear here -->
  </main>
  <script src="main.js"></script>
  <script>
    // Get the <main> element
    var node = document.getElementsByTagName('main')[0];
    // Embed the Elm application in the <main> element
    var app = Elm.Main.embed(node);
  </script>
</body>
…

First, we include the compiled Elm file (a .js file) in a <script> tag, then we get the element where the application is going to be rendered, in this case the <main> element, and finally we call Elm.Main.embed(<element>), where <element> is the HTML node we got previously.

And that's it.

Conclusion

Elm is a great alternative to JavaScript frameworks for building large web applications. It not only provides a default architecture to keep things in order, but also all the nice qualities of a well-designed language for modern applications.

The topics covered in this article are found in most applications you will build, and they give you enough information to get started building production sites. Once you get used to them, the rest is just a matter of continued exploration.

Article Series:
  1. Why Elm? (And How To Get Started With It)
  2. Introduction to The Elm Architecture and How to Build our First Application
  3. The Structure of an Elm Application (You are here!)

The Structure of an Elm Application is a post from CSS-Tricks

The Tenth Fourth

Css Tricks - Tue, 07/04/2017 - 5:28am

We made it a decade! It's our tenth birthday! 🎉 This is an extra-special one, as we hit those double digits. Each year on July 4th we mark the occasion with a post. In that tradition, allow me to ramble on a bit about the past and present.

The very first post ever on this site was literally a CSS trick. It's a classic, too. "Header Text Image Replacement":

.headerReplacement { text-indent: -9999px; width: 600px; height: 100px; background: url(/path/to/your/image.jpg) #cccccc no-repeat; }

Funny, I just used that trick a couple of days ago.

The post is interesting to me for a number of reasons. For one, I certainly didn't come up with that technique. At the time, I was just learning CSS myself and writing down interesting stuff I'd come across and used in my own work. I think I felt like I learned it a little more deeply by writing it out as an explanation like that.

For another, at the time, I was entirely unaware of where a trick like that fit into CSS history and larger discussions about CSS and semantics and accessibility and all that. A year later, I started getting interested in stuff like that and did stuff like rounded up many possible techniques for image replacement. Ultimately, even making a "museum" for it.

Before I go too much further here, I gotta mention the fact that we just re-opened the shop in honor of this anniversary. We made up some nerdy web related T-Shirts, and would love it if you would pick one up to help support the site:

CSS-Tricks was a WordPress site running on PHP and MySQL back then. Today, it's... a WordPress site running on PHP and MySQL. Although WordPress was 2.0.1 back then and 4.8 now. PHP was 5.2 then and 7.1 now. MySQL 5.0 then and 5.7 now. All those seem fairly small version bumps for a 10-year span, but really they are quite significant technological advances.

Back then we made sites with HTML, CSS, and JavaScript. These days, sites are... HTML, CSS, and JavaScript.

I try to keep a design history the best I can. Let's do a little blast-to-the-past of header styles:

This one isn't to be underestimated: it takes serious work to keep a website running. There is always something that needs to be done.

  • There is always some bit of software that needs to be updated.
  • There is always some weird bug that needs attention.
  • There is always some business opportunity that needs work to get going.
  • There is always some part of the design that really needs a look.
  • There is always some SSL certificate to worry about.
  • There is always some server or DevOps thing to think about.

I have a whole section of my TODO's called "Site Work" that is full of things I need to get done around here. For example, right this second, I know there are some assets that are loading in a way I don't want them to and I need to look at it for performance reasons. I'd like to do some stuff with embedded Pens to make them a bit wider by default, but need to be careful not to screw up any layout. I know markdown is behaving weird in the forums for the 692nd time, and that a private forum is showing publicly in a place I don't want. That's like 5% of the list!

I shudder to think what would happen if all this work wasn't done constantly. The site would fall to pieces.

And that doesn't include what you might actually think is the hard work involved in running a website:

  • Writing new content
  • Editing submitted content
  • Updating old content
  • Managing the publishing schedule and planning future content
  • Community management
  • Promoting and marketing the site
  • Finding sponsors
  • Making sure sponsors are happy
  • Social media

If you do all that work, on both lists, the hope is that you get to keep on keeping on. Everyone gets paid for their effort. This is not a hockey-stick-growth kind of site. It's a modest publication.

Speaking of slow growth, that's the deal:

That's not representative of just doing the same ol' same ol' year in and year out. That's representative of more and more people working on the site and more and more money being invested back into the site.

One interesting aspect of this is how the bulk of that traffic is generated by search. Of course, I have no problem with that. I'm very happy that this site shows up in search results and can be useful to people that way. At the same time, having an active readership is a very valuable thing. Not just people who show up in search, but people who read the site regularly like they do the news. Definitely a balance there. That's why we do things like invest in the newsletter, to make sure we have ways to read CSS-Tricks that come to you and are worth your time.

On a personal note, I'm still living in Milwaukee, back here after a 7-month stint in Miami. My fiancée Miranda got a job down there at FIU and we took the opportunity to move down, skip the Wisconsin winter, and be close to our Florida friends. I don't post publicly all that much about personal life stuff, but this will be a huge year for me. The move to Miami and back was big! Miranda and I are getting married this summer! We're also expecting a baby in the fall! And we're also planning to move to Oregon in late summer! Crazy times. There almost couldn't possibly be more going on, especially factoring in all this running-multiple-businesses stuff and a fairly aggressive speaking schedule this year.

My main focus is CodePen, which has had a tremendous last year. After taking funding, hiring an amazing team, and releasing lots of big stuff, we've got ourselves to that wonderful spot all businesses desire: profitability. The roadmap of ideas on CodePen is absolutely never ending. I've never felt like we have more work in front of us as strongly as I do right now.

I'd like to give special thanks to all the sponsors that make the site possible. I can't thank every single one, but I will give a special shout out to Media Temple, who has been a long time sponsor and supporter of CSS-Tricks.

And of course the heartiest of thanks to all you readers, without whom there would be no reason to have a site at all. The discourse that happens here is top notch and I couldn't be happier to facilitate it. And lastly, as you likely know, this site is by front-end developers for front-end developers, so if you have something to say, feel free to reach out.

See the Pen Conways Fireworks by Ben Matthews (@tsuhre) on CodePen.

The Tenth Fourth is a post from CSS-Tricks

Repeatable, Staggered Animation Three Ways: Sass, GSAP and Web Animations API

Css Tricks - Tue, 07/04/2017 - 5:04am

Staggered animation, also known as "follow through" or "overlapping action" is one of the twelve Disney principles of animation as defined by Ollie Johnston and Frank Thomas in their 1981 book "The Illusion of Life". At its core, the concept deals with animating objects in delayed succession to produce fluid motion.

The technique doesn't only apply to cute character animations, though. The motion design aspect of a digital interface has significant implications for UX, user perception and "feel". Google even makes a point to mention staggered animation in its Motion Choreography page, as part of the Material Design guide:

While the topic of motion design is truly vast, I often find myself applying bits and pieces even in smallest of projects. During the design process of the Interactive Coke ad on Eko I was tasked with creating some animation to be shown as the interactive video is loading, and so this mockup was born:

At first glance, this animation seems trivial to implement in CSS, but it turns out that's not the case! While it might be simpler with GSAP and the shiny new Web Animations API, doing so with CSS requires a few tricks, which I'm going to explain in this post. Why use CSS at all, then? In this case, since the animation was meant to run while the user waits for assets to load, it didn't make much sense to load an animation library just to display a loading spinner.

First, a bit about the anatomy of the animation.

There are four circles, absolutely positioned within a container with overflow: hidden to frame and crop the edges of the two outermost circles. Why four and not three? Because the first one is offscreen, waiting to enter stage left, and the last one exits the frame stage right. The other two are always in the frame. This way, the end state of the animation iteration looks exactly like its beginning state. Circle 1 takes circle 2's place, circle 2 takes circle 3's place and so on.

Here's the basic HTML:

<div id="container">
  <span></span>
  <span></span>
  <span></span>
  <span></span>
</div>

And the accompanying CSS:

#container {
  position: absolute;
  left: 50%;
  top: 50%;
  transform: translate(-50%, -50%);
  width: 160px;
  height: 40px;
  display: block;
  overflow: hidden;
}

span {
  width: 40px;
  height: 40px;
  border-radius: 50%;
  background: #4df5c4;
  display: inline-block;
  position: absolute;
  transform: translateX(0px);
}

Let's try this out with a simple animation for each circle that translates X from 0 to 60 pixels:

See the Pen dot loader - no stagger by Opher Vishnia (@OpherV) on CodePen.

Looks kind of weird and robotic, right? That's because we're missing one major component: staggered animation. That is, each circle's animation needs to start a bit after its predecessor's. "No problem!", you might think to yourself, "let's use the animation-delay property. We'll give the 4th circle a value of 0s, the 3rd of 0.15s and so on". Alright, let's try that:

See the Pen dot loader - broken by Opher Vishnia (@OpherV) on CodePen.

Hmm… What just happened? The animation-delay property affects only the initial delay before the animation starts. It doesn't add additional delays between iterations, so the animations go out of sync, like in the following diagram:

Math to the rescue

To overcome this, I baked the delay into the animation. CSS keyframe animations are specified in percentages, and with some calculation, you can use those to define how much delay the animation should include. For example, if you set an animation-duration of 1s, specify your start keyframe at 0%, the same values at 20%, your end values at 80% and the same end values at 100%, your animation will wait 0.2 seconds, run for 0.6 seconds, then wait for another 0.2 seconds.
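A minimal sketch of that example (the keyframe name is mine, not from the demo):

```css
/* With animation-duration: 1s this waits 0.2s, animates for 0.6s,
   then holds the end state for the final 0.2s of every iteration. */
@keyframes slide-with-waits {
  0%   { transform: translateX(0); }    /* start of the initial wait */
  20%  { transform: translateX(0); }    /* the movement actually begins */
  80%  { transform: translateX(60px); } /* movement ends; final wait starts */
  100% { transform: translateX(60px); }
}
```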

In my case, I wanted each circle to wait with a stagger time of 0.15 seconds before performing the actual animation taking 0.5 seconds, with the entire process taking 1 second. This means that the 4th circle animation waits 0 seconds, then animates for 0.5 seconds and waits for another 0.5 seconds. The 3rd circle waits 0.15 seconds, then animates for 0.5 seconds and waits for the remaining 0.35 seconds, and so forth.

To achieve this, you need four keyframes (or three keyframe pairs): 1 and 2 account for the stagger wait, 2 and 3 for the actual animation time while 3 and 4 account for the final wait. The "trick" is to understand how to convert the required timings into keyframe percentages, but that's a relatively simple calculation. For example, the 2nd circle needs to wait 0.15 * 2 = 0.3 seconds, then animate for 0.5 seconds. I know the total time for the animation is one second, so the keyframe percentages are calculated like so:

0s   = 0%
0.3s = 0.3 / 1s * 100 = 30%
0.8s = (0.3 + 0.5) / 1s * 100 = 80%
1s   = 100%
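The same arithmetic generalizes to all four circles; here's a tiny JavaScript helper (hypothetical, not part of the demo) that turns the timings into keyframe percentages:

```javascript
// Convert a circle's stagger timings into CSS keyframe percentages.
// index is the circle's position in the stagger order (0 starts immediately).
function keyframePercents(index, staggerTime, animTime, totalTime) {
  var start = index * staggerTime; // seconds spent waiting before animating
  var end = start + animTime;      // seconds at which the movement finishes
  return {
    waitStart: 0,
    animStart: (start / totalTime) * 100,
    animEnd: (end / totalTime) * 100,
    waitEnd: 100
  };
}

// The 2nd circle from the example: waits 0.3s, then animates until 0.8s.
console.log(keyframePercents(2, 0.15, 0.5, 1));
// → { waitStart: 0, animStart: 30, animEnd: 80, waitEnd: 100 }
```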

The end result looks something like this:

With the entire animation, including stagger time and wait baked into the CSS keyframes taking exactly one second, the animation doesn't go out of sync.

Luckily, Sass allows us to automate this process with a simple for loop and some inline math, which ultimately compiles into a series of keyframe animations. This way you can manipulate the timing variables to experiment and find whatever works best for your animation:

@mixin createCircleAnimation($i, $animTime, $totalTime, $delay) {
  @include keyframes(circle#{$i}) {
    0% {
      @include transform(translateX(0));
    }
    #{($i * $delay)/$totalTime * 100}% {
      @include transform(translateX(0));
    }
    #{($i * $delay + $animTime)/$totalTime * 100}% {
      @include transform(translateX(60px));
    }
    100% {
      @include transform(translateX(60px));
    }
  }
}

$animTime: 0.5s;
$totalTime: 1s;
$staggerTime: 0.15s;

@for $i from 0 through 3 {
  @include createCircleAnimation($i, $animTime, $totalTime, $staggerTime);

  span:nth-child(#{($i + 1)}) {
    animation: circle#{(3 - $i)} $totalTime infinite;
    left: #{$i * 60 - 60}px;
  }
}

And voila — here's the final result

See the Pen dot loading animation - SASS stagger by Opher Vishnia (@OpherV) on CodePen.

There are two main caveats with this method:

First, you need to make sure the defined stagger time and animation time aren't so long that they overlap the total animation time, otherwise the math (and the animation) will break.

Second, this method generates a hefty amount of CSS code, especially if you're using Sass to emit all the prefixes for browser compatibility. In my example, I had only four items to animate, but if yours has more, the amount of generated code might not be worth the effort, and you'll probably want to stick with JS-based animation libraries such as GSAP. Still, doing this entirely in CSS is pretty cool.

Making life easier

To contrast the verbosity of the Sass solution, I'd like to show you how the same can be easily achieved with the use of GSAP's Timeline, and staggerTo function:

See the Pen dot loading animation - GSAP by Opher Vishnia (@OpherV) on CodePen.

There are two interesting bits here. First, the last parameter of staggerTo, which defines the wait time between animating elements is set to a negative value (-0.15). This allows the elements to stagger in reverse order (circle 4–3–2–1 instead of 1–2–3–4). Cool, huh?
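To see why a negative value reverses the order, it helps to model the start times by hand. This is a hypothetical model of the scheduling, not GSAP's actual implementation:

```javascript
// Start times for n staggered items. A positive stagger delays each item
// a bit more than the previous one; a negative stagger flips the order,
// shifted so the earliest start is still 0.
function staggerStartTimes(n, stagger) {
  var starts = [];
  for (var i = 0; i < n; i++) {
    starts.push(stagger >= 0 ? i * stagger : (n - 1 - i) * -stagger);
  }
  return starts;
}

console.log(staggerStartTimes(4, -0.15));
// The last item starts at 0 and the first item starts last,
// i.e. the circles animate in 4-3-2-1 order.
```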

Second, see the bit with tl.set({}, {}, "1");? What's this weird syntax all about? That's a neat hack to implement the wait time at the end of each circle's animation. Essentially, by setting an empty object to an empty object at time 1, the Timeline animation will now repeat after the 1-second mark, rather than after the circle animation has ended.

Looking forward to the future

The Web Animations API is the new and exciting kid on the block, but out of scope for this article. I couldn't resist providing you with a sample implementation though, which uses the same math as the CSS implementation:

See the Pen dot loading animation - WAAPI by Opher Vishnia (@OpherV) on CodePen.

Was this helpful? Have you created some smooth animations using this technique? Let me know!

Repeatable, Staggered Animation Three Ways: Sass, GSAP and Web Animations API is a post from CSS-Tricks

Why Use a Third-Party Form Validation Library?

Css Tricks - Mon, 07/03/2017 - 1:08pm

We've just wrapped up a great series of posts from Chris Ferdinandi on modern form validation. It starts here. These days, browsers have quite a few built-in tools for handling form validation including HTML attributes that can do quite a bit on their own, and a JavaScript API that can do even more. Chris even showed us that with a litttttle bit more work we can get down to IE 9 support with ideal UX.

So what's up with third-party form validation libraries? Why would you use a library for something you can get for free?

You need deeper browser support.

All "modern" browsers + IE 9 down is pretty good, especially when you've accounted for cross-browser differences nicely as Chris did. But it's not inconceivable that you need to go even deeper.

Libraries like Parsley go down a smidge further, to IE 8.

You're using a JavaScript framework that doesn't want you touching the DOM yourself.

When you're working with a framework like React, you aren't really attaching event handlers or inserting anything into the DOM manually at all. You might be passing values to a form element via props and setting error data in state.

You can absolutely do all that with native form validation and native constraint validation, but it also makes sense why you might reach for an add-on designed for that. You might reach for a package like react-validation that helps out in that world. Or Formik, which looks to be built just for this kind of thing:

Likewise, there is:

The hope is that these libraries are nice and lightweight because they take advantage of the native APIs when they can. I'll leave that to you to look into when you need to reach for this kind of thing.

You're compelled by the API.

One of the major reasons any framework enjoys success is because of API nicety. If it makes your code easy to understand and work on, that's a big deal.

Sometimes the nicest JavaScript API's start with HTML attributes for activation and configuration. Remember that native HTML constraint validation is all about HTML attributes controlling form validation, so ideally any third-party form validation library would use them in the spirit of progressive enhancement.
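Parsley is a good example of that spirit: it's activated and configured through data-* attributes layered on top of the native ones. A sketch from memory, so treat the exact attribute names as approximate:

```html
<form data-parsley-validate>
  <!-- Native required/type still do their job;
       the data-parsley-* attributes add library-specific rules. -->
  <input type="email" required data-parsley-trigger="change">
  <input type="text" data-parsley-length="[4, 20]">
</form>
```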

You're compelled by the integrations.

They promised you it "Works with Bootstrap!" and your project uses Bootstrap, so that seemed like a good fit. I get it, but this is my least favorite reason. In this context, the only thing Bootstrap would care about is a handful of class names and where you stick the divs.

It validates more than the browser offers.

The browser can validate if an email address is technically valid, but not if it bounces or not. The browser can validate if a zip code looks like a real zip code, but not if it actually exists or not. The browser can validate if a URL is a proper URL, but not if it resolves or not. A third-party lib could potentially do stuff like this.
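For instance, the browser's email check boils down to the "valid email address" pattern from the WHATWG HTML spec (transcribed here from memory, so double-check it against the spec), and it tells you nothing about deliverability:

```javascript
// The format check behind <input type="email">, per the WHATWG HTML spec.
var validEmail = /^[a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/;

console.log(validEmail.test("nobody@example.com")); // true: well-formed...
console.log(validEmail.test("not-an-email"));       // false: no @ sign
// ...but "well-formed" says nothing about whether the mailbox exists.
// Confirming that requires a server round-trip, which is exactly the
// kind of feature a third-party library could add on top.
```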

You're compelled by fancy features.
  • Perhaps the library offers a feature where when there is an error, it scrolls the form to that first error.
  • Perhaps the library offers a captcha feature, which is a related concept to form validation, and you need it.
  • Perhaps the library offers an event system that you like. It publishes events when certain things happen on the form, and that's useful to you.
  • Perhaps the library not only validates the form, but creates a summary of all the errors to show the user.

These things are a little above and beyond straight-up form validation. You could do all of this with native validation, but I could see how this would drive adoption of a third-party library.

You need translation.

Native browser validation messages (the default kind that come with the HTML attributes) are in the language that browser is in. So the French version of Firefox spits out messages in French, despite the language of the page itself:

Third-party form validation libraries can ship with language packs that help with this. FormValidation is an example.

Conclusion

I'm not recommending a form validation library. In fact, if anything, the opposite.

I imagine that third-party form validation libraries are going to fall away a bit as browser support and UX gets better and better for the native APIs.

Or (and I imagine many already do this), internally they start using native APIs more and more, then offer nice features on top of the validation itself.

Why Use a Third-Party Form Validation Library? is a post from CSS-Tricks

CSS is Awesome

Css Tricks - Mon, 07/03/2017 - 2:33am

I bought this mug recently for use at work. Being a professional web developer, I decided it would establish me as the office's king of irony. The joke on it isn't unique, of course. I've seen it everywhere from t-shirts to conference presentations.

Most of you reading this have probably encountered this image at least once. It's a joke we can all relate to, right? You try and do something simple with CSS, and the arcane ways in which even basic properties interact inevitably borks it up.

If this joke epitomizes the collective frustration that developers have with CSS, then at the risk of ruining the fun, I thought it would be interesting to dissect the bug at its heart, as a case study in why people get frustrated with CSS.

The problem

See the Pen CSS is Awesome by Brandon (@brundolf) on CodePen.

There are three conditions that have to be met for this problem to occur:

  • The content can't shrink to fit the container
  • The container can't expand to fit the content
  • The container doesn't handle overflow gracefully

In real-world scenarios, the second condition is most likely the thing that needs to be fixed, but we'll explore all three.

Fixing the content size

This is a little bit unfair to the box's content, because the word AWESOME can't fit on one line at the given font size and container width. By default, text wraps at white space and doesn't break up words. But let's assume for a moment that we absolutely cannot afford to change the container's size. Perhaps, for instance, the text is a blown-up header on a site that's being viewed on an especially small phone.

Breaking up words

To get a continuous word to wrap, we have to use the CSS property word-break. Setting it to break-all will instruct the browser to break up words if necessary to wrap text content within its container.
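In stylesheet form that's a one-property fix (the selector here is just illustrative):

```css
.box {
  width: 100px;          /* too narrow for the word AWESOME at this size */
  word-break: break-all; /* permit breaks inside words so the text wraps */
}
```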

See the Pen CSS is Awesome: word-break by Brandon (@brundolf) on CodePen.

Other content

In this case, the only way to make the content more responsive was to enable word breaking. But there are other kinds of content that might be overflowing. If white-space were set to nowrap, the text wouldn't even wrap between words. Or, the content could be a block-level element whose width or min-width is set to be greater than the container's width.

Fixing the container size

There are many possible ways the container element might have been forced not to grow, for example: width, max-width, and flex. But the thing they all have in common is that the width is being determined by something other than the content. This isn't inherently bad, especially since there is no fixed height, which in most cases would cause the content to simply expand downwards. But if you run into a variation on this situation, it's worth considering whether you really need to be controlling the width, or whether it can be left up to the page to determine.

Alternatives to setting width

More often than not, if you set an element's width, and you set it in pixels, you really meant to set either min-width or max-width. Ask yourself what you really care about. Was this element disappearing entirely when it lacked content because it shrunk to a width of 0? Set min-width, so that it has dimension but still has room to grow. Was it getting so wide that a whole paragraph fit on one line and was hard to read? Set max-width, so it won't go beyond a certain limit, but also won't extend beyond the edge of the screen on small devices. CSS is like an assistant: you want to guide it, not dictate its every move.
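That guideline can be sketched like so (class names and values are made up for illustration):

```css
/* Rigid: a hard width can overflow small screens or look empty when sparse */
.rigid {
  width: 300px;
}

/* Flexible: state only the real constraints and let the page do the rest */
.flexible {
  min-width: 150px; /* keeps the element visible even with little content */
  max-width: 600px; /* keeps line lengths readable on wide screens */
}
```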

Overflow caused by flexbox

If one of your flex items has overflowing content, things get a little more complicated. The first thing you can do is check if you're specifying its width, as in the previous section. If you aren't, probably what's happening is the element is "flex-shrinking". Flex items first get sized following the normal rules: width, content, etc. The resulting size is called their flex basis (which can also be set explicitly with a property of the same name). After establishing the flex basis for each item, flex-grow and flex-shrink are applied (or flex, which specifies both at once). The items grow and shrink in a weighted way, based on these two values and the container's size.

Setting flex-shrink: 0 will instruct the browser that this item should never get smaller than its flex basis. If the flex basis is determined by content (the default), this should solve your problem. Be careful with this, though: you could end up running into the same problem again in the element's parent. If this flex item refuses to shrink, even when the flex container is smaller than it, it'll overflow and you're back to square one.
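The escape hatch described above is a single declaration (the class names are mine):

```css
.flex-container {
  display: flex;
}

/* This item will never shrink below its flex basis, i.e. below
   the size its content dictates. */
.no-squish {
  flex-shrink: 0;
}
```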

Handling overflow

Sometimes there's just no way around it. Maybe the container width is limited by the screen size itself. Maybe the content is a table of data, with rows that can't be wrapped and columns that can't be collapsed any further. We can still handle the overflow more gracefully than just having it spill out wherever.

overflow: hidden;

The most straightforward solution is to hide the content that's overflowing. Setting overflow: hidden; will simply cut things off where they reach the border of the container element. If the content is of a more aesthetic nature and doesn't include critical info, this might be acceptable.

See the Pen CSS is Awesome: overflow:hidden by Brandon (@brundolf) on CodePen.

If the content is text, we can make this a little more visually appealing by adding text-overflow: ellipsis;, which automatically adds a nice little "…" to text that gets cut off. It is worth noting, though, that you'll see slightly less of the actual content to make room for the ellipsis. Also note that this requires overflow: hidden; to be set.
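Note that the classic single-line ellipsis needs three properties working together (a hypothetical utility class):

```css
.truncate {
  white-space: nowrap;     /* keep the text on one line so it can overflow */
  overflow: hidden;        /* required: clip the text at the container edge */
  text-overflow: ellipsis; /* swap the clipped edge for a "…" */
}
```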

See the Pen CSS is Awesome: ellipsis by Brandon (@brundolf) on CodePen.

overflow: auto;

The preferable remedy is usually going to be setting overflow-x: auto;. This gives the browser the go-ahead to add a scroll bar if the content overflows, allowing the user to scroll the container in that direction.

See the Pen CSS is Awesome: overflow:auto by Brandon (@brundolf) on CodePen.

This is a particularly graceful fallback, because it means that no matter what, the user will be able to access all of the content. Plus, the scrollbar will only appear if it's needed, which means it's not a bad idea to add this property in key places, even if you don't expect it to come into play.

Why does this conundrum resonate so universally with people who have used CSS?

CSS is hard because its properties interact, often in unexpected ways. Because when you set one of them, you're never just setting that one thing. That one thing combines and bounces off of and contradicts with a dozen other things, including default things that you never actually set yourself.

One rule of thumb for mitigating this is, never be more explicit than you need to be. Web pages are responsive by default. Writing good CSS means leveraging that fact instead of overriding it. Use percentages or viewport units instead of a media query if possible. Use min-width instead of width where you can. Think in terms of rules, in terms of what you really mean to say, instead of just adding properties until things look right. Try to get a feel for how the browser resolves layout and sizing, and make your changes and additions on top of that judiciously. Work with CSS, instead of against it.

Another rule of thumb is to let either width or height be determined by content. In this case, that wasn't enough, but in most cases, it will be. Give things an avenue for expansion. When you're setting rules for how your elements get sized, especially if those elements will contain text content, think through the edge cases. "What if this content was pared down to a single character? What if this content expanded to be three paragraphs? It might not look great, but would my layout be totally broken?"

CSS is weird. It's unlike any other code, and that makes a lot of programmers uncomfortable. But used wisely it can, in fact, be awesome.

CSS is Awesome is a post from CSS-Tricks

How To Rename a Font in CSS

Css Tricks - Sun, 07/02/2017 - 1:58am

Nothin' like some good ol' fashioned CSS trickery. Zach Leatherman documents how you can use @font-face blocks with local() sources to redefine a font-family. It can actually be a bit useful as well, by essentially being an abstraction for your font stack.

@font-face {
  font-family: My San Francisco Alias;
  src: local(system-ui), local(-apple-system), local('.SFNSText-Regular');
}

p {
  font-family: My San Francisco Alias, fantasy;
}

Direct Link to ArticlePermalink

How To Rename a Font in CSS is a post from CSS-Tricks

Full Page Screenshots in Browsers

Css Tricks - Fri, 06/30/2017 - 11:19am

It can be quite useful to get a "full page" screenshot in a browser. That is, not just the visible area. The visible area is pretty easy to get just by screenshotting the screen. A full page screenshot captures the entire web site even if it needs to be scrolled around to see all of it. You could take individual screenshots of the visible area and use a photo editing program to stitch them together, but that's a pain in the butt. Never mind the fact that it's extra tricky with things like fixed position elements.

Fortunately browsers can help us out a bit here.

Chrome

As of Chrome 59, it's built into DevTools. Here's a video. You use "Responsive Design Mode", then the menu option to get the full page screenshot is in the menu in the upper right.

If you need a "mobile" full length screenshot, just adjust the responsive view to the size you want and save again. Handy!

I've also had good luck with the Nimbus extension in Chrome.

Firefox

There is a setting in the Firefox DevTools that you need to turn on called Take a screenshot of the entire page under Available Toolbox Buttons. Flip that on, and you get a button.

Safari

Safari has File > Export as PDF, but it's pretty awkward. I have no idea how it decides what to export and what not to, the layout is weird, and it's broken into multiple pages for some reason.

The Awesome Screenshot extension seems to do the trick.

There are also some native apps like BrowseShot and Paparazzi!

Full Page Screenshots in Browsers is a post from CSS-Tricks

Five Huge CSS Milestones

Css Tricks - Fri, 06/30/2017 - 2:35am

CSS is over 20 years old now. I've only been using it for a little more than half that. In my experience, the biggest things to happen to CSS in that time were:

  1. Firebug
  2. Chrome
  3. CSS3
  4. Preprocessing
  5. Flexbox & Grid

And there are plenty more changes to come.

Direct Link to ArticlePermalink

Five Huge CSS Milestones is a post from CSS-Tricks
