Front End Web Development

Emphasizing Emphasis

CSS-Tricks - Tue, 10/30/2018 - 10:54am

I think Facundo Corradini is right here in calling out our tweet. If you're italicizing text because it should be styled that way (e.g. using italics to display a person's internal thought dialog, as illustrated in our example), then that's an <i> and not an <em>, because <em> is for stress emphasis — as in, a word you would emphasize with your voice, if spoken, to affect meaning.
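To make the distinction concrete, here's a quick markup sketch (my own example, not taken from the article):

```html
<!-- <i>: text set off from the surrounding prose by convention,
     such as internal thoughts, ship names, or foreign terms -->
<p><i>I wonder if anyone noticed,</i> she thought.</p>

<!-- <em>: stress emphasis that changes the meaning when spoken aloud -->
<p>I never said she <em>stole</em> it.</p>
```

Read that second sentence aloud with the stress on a different word each time and the meaning shifts; that spoken-stress test is a decent rule of thumb for choosing <em>.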

Plus, I'm always down for long-form articles about the nuances of a handful of HTML elements!

Direct Link to Article · Permalink

The post Emphasizing Emphasis appeared first on CSS-Tricks.

Preventing Suicide with UX: A Case Study on Google Search

CSS-Tricks - Tue, 10/30/2018 - 4:08am

I came clean about my long-running and ongoing battle with chronic depression last year in a blog post on my personal site. But don't worry, things are no worse than they were then and this post is about something else. This post is about designing empathetic user experiences.

Several other sites have been posting on this topic, whether it's how UX can improve mental health or how it can be used to prevent suicide.

It's the latter I'd like to focus on here. In fact, Lucas Chae did such a fantastic job on that post that you should stop reading this right now and head over there. Seriously, if you only have time to read one post today, that's the one.

Lucas goes to great lengths to show how UX can prevent suicide using Google search as a case study. I don't disagree with any of his ideas. I love them. But anything has room for improvement or at least refinement, and I have a few ideas I'd like to toss in the ring. Again, not because the ideas are bad or screaming to be fixed, but because I have a personal connection to this topic and am interested in seeing great ideas like this move forward.

Let's briefly rehash the context

I really hope you took up my suggestion to pop over to Lucas's post, but if you didn't, the main gist of it goes:

  • Roughly 500,000 people submit suicide-related searches on Google every month.
  • Google returns some helpful information pinned to the top of the results, but it feels like a requirement more than a helpful interaction.
  • Lucas mocked up a proposed experience that replaces the existing pinned result with UI that feels more conversational and relatable.

There's a lot more nuance to it, but that's roughly what we're looking at. My four suggestions are merely standing on the shoulders of what's proposed there.

Gut check

At this point, you might be asking how much responsibility Google has in preventing suicide and whether it really needs to invest more into this than it already does. I like the way Lucas phrases it in his proposal:

“We are only a search engine, and cannot give you answers to your hardest questions. But we can help you get there.”

That is a good line to draw. And, yes, Google is technically already doing that to some extent, but this is an opportunity to provide recommendations that are as relevant as the search results that are Google's bread and butter.

It's also worth couching this entire diatribe with a note that there are zero expectations and absolutely no responsibility for Google to do anything proposed in this post. This is merely demonstrating the impact that UX design can have on a real-world problem using Google as the vehicle to drive these points home. We really could have done this with Bing, Yahoo, DuckDuckGo or any similar product.

Suggestion 1: Reach the user earlier

Lucas's work zeroes in on what Google delivers after the search query has been made. Thinking about this as a user flow, we're looking at something like this:

Step 1: User is contemplating suicide, reaches for phone or computer
  • Expected outcome: Able to start up a device and fire up a browser to navigate to Google
  • Emotions: Hopeless, sad, lonely

Step 2: Enters "How to kill myself" into the search field
  • Expected outcome: Possibly some related search suggestions, but ultimately a means to submit the search
  • Emotions: Probably not much different than the previous step, but maybe some pensiveness over which search term will produce the best results

Step 3: Submits search
  • Expected outcome: A page refresh with relevant links
  • Emotions: Anxious, scared

The majority of Lucas's work is driven by Step 3 (and beyond). I think we can narrow the gap of time between contemplation and search submission by zooming in on Step 2.

Have you ever typed a partial query into Google? The auto-suggestions are often extremely useful (at least to me) but they can be pretty hilarious as well. Take, for example:

I'm not sure why that was the first thing that came to mind as an example but, hey, it's still nuts that those are actual examples of user submissions and have made the list of predicted suggestions. I mean, of course, Russell Crowe is alive... right?

(Does some more searching.)

Right!

Funny (or not funny) enough, Google does not provide those suggestions for suicide-related searches. Or, more accurately, it weeds out suicide-related results and provides others until it simply can't suggest anything:

😂 LOL, did you catch "how to kill mysql process" in there?

I understand why Google would want to exclude suicidal terms from their suggestions. Why plant ideas in people's heads if that is not what the user is actually looking for? Seems like something to avoid.

But what if it's an opportunity rather than an obstacle? If we have a chance to interact with someone even a few seconds earlier in the process, then it's possible we're talking about saving some lives.

I would suggest embracing the prediction engine and even beefing it up to take sensitive searches head on. For example, what if we took Lucas's initial idea of interacting with the user right from here instead of waiting for a submitted search and its results?

Suggestion 2: Amp up the personalization

Let's all agree that Google knows way too much about people in general. I know, I know. Some people love it and see it as a potential force for good while others are wary of the impact a digital footprint can have IRL. I read The Circle, too.

Either way, Google likely knows a thing or two about the user, even if they are not logged in. But, assuming that the user is logged in and has at least a partial profile (both of which I think are safe assumptions given Google's ubiquitous nature, prevalent reliance on it for OAuth, and that the user turned to it instead of, say, Bing), then we can make a personal appeal to that user instead of serving generalized content.

For example, we can make use of avatars, names, locations, search history, etc. And, yes, the likelihood of Google having all of this among many, many (MANY!) other bits of data is extremely good — nay, great!

If we are going to utilize the predictive search feature, then we can put Google's treasure trove of user data into play to grab the user's attention and extend an invitation to engage before the search has even happened. That said, while I'm suggesting that we "amp" up the personalization, let's agree that any attempt to be too smart can also lead to poor user experiences and, at worst, even more personal pain.

So, while we have a treasure trove of data, keeping the scope to a personalized greeting or introduction will suffice.

Suggestion 3: Leverage Google's technical arsenal

The biggest criticism of Google's existing experience is that it feels like a requirement. I wholeheartedly agree with Lucas here. Just look.

And, yes, that is now in my search history.

What that uninviting and impersonal approach has going for it is that it provides the user with two clear paths to get help: phone and online chat. Google has developed products that make calls and power video chats, so there's no reason why we can't take these already great innovations and put them to use in this new context.

The proposed design from Lucas maintains a call link in the UI, but it seems buried beneath the core interaction and totally removes online chat. If Google has the technical means to apply one-on-one interactions that narrow geographical distances between hurt and help, and has influence to partner with suicide prevention groups and mental health professionals, then by all means! Those are things I would absolutely work into the UI.

Suggestion 4: Go further to maintain the Google brand

Lucas makes a stellar point that any improvement to the UX should be mindful of Google's own brand and positioning:

This is a redesign, not a new service that I am launching. One principle I value the most when redesigning an existing interface is respecting the original design principles. Design is never just about making it look pretty. Design is a manifestation of a company’s philosophy and core-values based on years of research and testing.

Amen!

Lucas absolutely nails the grid system, color palette, iconography and baseline card component that come straight from the Material Design guidelines. Still, I think there is room to be even more faithful to Google's design DNA.

There is a specific point where Lucas deviates from the card component guidelines — the UI that allows the user to categorize feelings.

The animation and general interaction is slick. However, it does feel off-brand... at least to me. We'll get to my mockups in a bit, but I wanted to make sure any new UI took the card component as far as it could go, always using established Google components. For example, I took the UI Lucas created for feeling categories and "dumbed" it down to literal card patterns as they're described in the docs.

OK, onto the mockups...

Looking past my lack of design chops, here's where I landed with everything we've covered so far.

Predictive search interface

The user has landed on google.com and is in the process of typing, "how to kill myself." Rather than disabling predictive suggestions like the current functionality does, we tackle the tough query head on and engage the user, making a personalized plea and offering to point them toward positive answers.

Notice that the "Continue" text link is in a disabled state. That's to add a little friction in the flow to encourage the user to engage with the slider. Of course, we're also creating a clear path to "Call Help" if the user does indeed need to talk with someone and bail on this UI.

Interacting with the user

What's the slider? Well, it's more or less an interpretation of the UI Lucas created that allows the user to provide more detail for the pain they're suffering:

I find that my proposed pattern is more in line with Google's Material Design guidelines, specifically with sliders.

The slider is a nice way for users to make a selection across a wide variety of options in a small space. In this case, we're allowing the user to slide between the same categories Lucas defined in his research and introducing sub-categories from there as text links.

One thing we want to make sure of is that the intent of the options in the slider is not misinterpreted. For example, is "Love & Relationships" first because it's the most important? It'd be a shame if that's the way it came across. One idea (among others, I'm sure) is to float a little information icon in there that displays a tooltip on hover explaining that the options are in no particular order.

I think the outcome here, again, is a nice way to get the same level of detail that Lucas mocked up into a smaller space while gaining valuable feedback from the user that helps us learn the context of their feelings.

The first step to help

Once the user has made a selection in the slider and clicked on a sub-category, we can provide the same encouraging, inspiring content Lucas proposed, perhaps pulled from real articles on the subject.

Note that we technically changed the state of the "Continue" text link from disabled to active in the last screen. We can use the context the user has provided so far to allow them to proceed with a much safer and productive search query based on the category/sub-category that is selected.

Additional guidance

Will the UX so far prevent a suicide? Maybe. I hope! But there's a chance it won't.

But, hey, we now have better context for the user's feelings and can provide a relevant path to suggest answers. So, if the user has chosen the category "Love & Relationships" and selected the "Death of a loved one" sub-category, then we can send the user to search results for that subject rather than "How to kill myself" — which would inevitably lead to a more destructive set of search results than something on love and relationships.

Google already does a pretty darn good job of this...

Seriously, say what you want about the lack of design flair, but having a featured result up top that the user can personally relate to, additional search suggestions, and the organic results at the end makes for a pretty compelling experience. A much better place to send the user!

The only change I would suggest is to maintain the ability to make a call to or initiate a chat with a trained professional. It doesn't need to scream for attention in the UI, but it should be available. Material Design's banner component seems pretty relevant here, though I can see pushback on that as far as the literal use case.

Are we making progress?

I give the greatest hat tip of all hat tips to Lucas Chae for broaching such a hard topic. Not only does he do a bang-up job of solving a real-world problem, but he also brings awareness to it. Obviously, it's something I'm able to relate to on a deep personal level and it's encouraging to see others both empathizing with it and pushing forward ideas to deal with it.

Thank you for that, Lucas.

I hope that by putting my own ideas on the table, we can keep this conversation moving. And that means getting even more ideas on the table and seeing where we can take this bad boy as a community.

So, do you have feedback on anything you've seen so far? Maybe ideas of your own you'd like to mock up and share? Please do! The web community is a great one and we can accomplish amazing things together. 💪

Note: My sincere thanks to Chris, Andy Bell and Eric Bailey for providing thoughtful, insightful and thorough feedback on earlier drafts of this post.

The post Preventing Suicide with UX: A Case Study on Google Search appeared first on CSS-Tricks.

Styled Payment Forms with Wufoo

CSS-Tricks - Tue, 10/30/2018 - 4:07am

(This is a sponsored post.)

Thanks so much to Wufoo for the support of CSS-Tricks! Wufoo is a form builder where you can quickly build forms of any complexity. From simple contact forms to multi-page logic-riddled application forms that sync to Salesforce and handle site-integrated exit surveys, it handles lots of use cases!

There is another powerful feature of Wufoo: taking payments. It's especially worth knowing about, in my opinion, because of how affordable it is. It's essentially eCommerce without costing you any fees on top of your paid Wufoo account and payment processing fees. Not to mention you can integrate the forms into your own site and style them however you like.

Say you were having a Pledge Drive. A Wufoo form can easily accept money with a form like this, which you can build yourself or find in the gallery and install:

Using the Theme Builder, we can do a lot to style it, including options for coloring, typography and more:

We have full CSS control as well, if you really wanna get hands-on:

Then attach your payment processor to it. Wufoo supports all the biggest and best payment processors like Square, PayPal, Stripe, Authorize.net, etc.

I'd bet you could be up and running with a payment-enabled form in under an hour, all without having to deal with spam or security or any of that. Thanks, Wufoo!

Direct Link to Article · Permalink

The post Styled Payment Forms with Wufoo appeared first on CSS-Tricks.

The Three Types of Performance Testing

CSS-Tricks - Mon, 10/29/2018 - 12:00pm

We've been covering performance quite a bit — not just recently, but throughout the course of the year. Now, Harry Roberts weighs in by identifying three types of ways performance can be tested.

Of particular note is the first type of testing:

The first kind of testing a team should carry out is Proactive testing: this is very intentional and deliberate, and is an active attempt to identify performance issues.

This takes the form of developers assessing the performance impact of every piece of work they do as they’re doing it. The idea here is that we spot the problem before it becomes problematic. Prevention, after all, is cheaper than the cure. Capturing performance issues at this stage is much more preferable to spotting them after they’ve gone live.

I think about this type of performance all the time when I’m working on a team, although I’ve never had a name for it.

I guess what I’m always thinking about is how we can introduce front-end engineers into the design process as early as possible. I’ve found that the final product is much more performant when front-end engineers and designers brainstorm solutions together. Perhaps collaborating on a performance checklist is a good place to start?

Direct Link to Article · Permalink

The post The Three Types of Performance Testing appeared first on CSS-Tricks.

Voice-Controlled Web Visualizations with Vue.js and Machine Learning

CSS-Tricks - Mon, 10/29/2018 - 3:57am

In this tutorial, we’ll pair Vue.js, three.js and LUIS (Cognitive Services) to create a voice-controlled web visualization.

But first, a little context

Why would we need to use voice recognition? What problem might something like this solve?

A while ago I was getting on a bus in Chicago. The bus driver didn’t see me and closed the door on my wrist. As he started to go, I heard a popping sound in my wrist and he did eventually stop as the other passengers started yelling, but not before he ripped a few tendons in my arm.

I was supposed to take time off work but, typical for museum employees at that time, I was on contract and had no real health insurance. I didn’t make much to begin with so taking time off just wasn’t an option for me. I worked through the pain. And, eventually, the health of my wrist started deteriorating. It became really painful to even brush my teeth. Voice-to-text wasn't the ubiquitous technology that it is today, and the best tool then available was Dragon. It worked OK, but was pretty frustrating to learn and I still had to use my hands quite frequently because it would often error out. That was 10 years ago, so I’m sure that particular tech has improved significantly since then. My wrist has also improved significantly in that time.

The whole experience left me with a keen interest in voice-controlled technologies. What can we do if we can control the behaviors of the web in our favor, just by speaking? For an experiment, I decided to use LUIS, which is a machine learning-based service to build natural language through the use of custom models that can continuously improve. We can use this for apps, bots, and IoT devices. This way, we can create a visualization that responds to any voice — and it can improve itself by learning along the way.

GitHub Repo

Live Demo

Here’s a bird’s eye view of what we're building:

Setting up LUIS

We’ll get a free trial account for Azure and then go to the portal. We’ll select Cognitive Services.

After picking New → AI/Machine Learning, we’ll select "Language Understanding" (or LUIS).

Then we’ll pick out our name and resource group.

We’ll collect our keys from the next screen and then head over to the LUIS dashboard.

It’s actually really fun to train these machines! We’ll set up a new application and create some intents, which are outcomes we want to trigger based on a given condition. Here’s the sample from this demo:

You may notice that we have a naming schema here. We do this so that it’s easier to categorize the intents. We’re going to first figure out the emotion and then listen for the intensity, so the initial intents are prefixed with either App (these are used primarily in the App.vue component) or Intensity.

If we dive into each particular intent, we see how the model is trained. We have some similar phrases that mean roughly the same thing:

You can see we have a lot of synonyms for training, but we also have the "Train" button up top for when we’re ready to start training the model. We click that button, get a success notification, and then we’re ready to publish. 😀

Setting up Vue

We’ll create a pretty standard Vue.js application via the Vue CLI. First, we run:

vue create three-vue-pattern

# then select Manually...
Vue CLI v3.0.0
? Please pick a preset:
    default (babel, eslint)
  ❯ Manually select features

# Then select the PWA feature and the other ones with the spacebar
? Please pick a preset: Manually select features
? Check the features needed for your project:
  ◉ Babel
  ◯ TypeScript
  ◉ Progressive Web App (PWA) Support
  ◉ Router
  ◉ Vuex
  ◉ CSS Pre-processors
  ◉ Linter / Formatter
  ◯ Unit Testing
  ◯ E2E Testing

? Pick a linter / formatter config:
    ESLint with error prevention only
    ESLint + Airbnb config
  ❯ ESLint + Standard config
    ESLint + Prettier

? Pick additional lint features:
  ◉ Lint on save
  ◯ Lint and fix on commit

Successfully created project three-vue-pattern.
Get started with the following commands:

$ cd three-vue-pattern
$ yarn serve

This will spin up a server for us and provide a typical Vue welcome screen. We’ll also add some dependencies to our application: three.js, sine-waves, and axios. three.js will help us create the WebGL visualization. sine-waves gives us a nice canvas abstraction for the loader. axios will allow us a really nice HTTP client so we can make calls to LUIS for analysis.

yarn add three sine-waves axios

Setting up our Vuex store

Now that we have a working model, let’s go get it with axios and bring it into our Vuex store. Then we can disseminate the information to all of the different components.

In state, we’ll store what we’re going to need:

state: {
  intent: 'None',
  intensity: 'None',
  score: 0,
  uiState: 'idle',
  zoom: 3,
  counter: 0,
},

intent and intensity will store the App and Intensity intents, respectively. The score will store our confidence (a score from 0 to 100 measuring how well the model thinks it can rank the input).

For uiState, we have three different states:

  • idle - waiting for the user input
  • listening - hearing the user input
  • fetching - getting user data from the API

Both zoom and counter are what we’ll use to update the data visualization.

Now, in actions, we’ll set the uiState (in a mutation) to fetching, and we’ll make a call to the API with axios using the generated keys we received when setting up LUIS.

getUnderstanding({ commit }, utterance) {
  commit('setUiState', 'fetching')
  const url = `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/4aba2274-c5df-4b0d-8ff7-57658254d042`

  axios({
    method: 'get',
    url,
    params: {
      verbose: true,
      timezoneOffset: 0,
      q: utterance
    },
    headers: {
      'Content-Type': 'application/json',
      'Ocp-Apim-Subscription-Key': 'XXXXXXXXXXXXXXXXXXX'
    }
  })

Then, once we’ve done that, we can get the top-ranked scoring intent and store it in our state.

We also need to create some mutations we can use to change the state. We’ll use these in our actions. In the upcoming Vue 3.0, this will be streamlined because mutations will be removed.

newIntent: (state, { intent, score }) => {
  if (intent.includes('Intensity')) {
    state.intensity = intent
    if (intent.includes('More')) {
      state.counter++
    } else if (intent.includes('Less')) {
      state.counter--
    }
  } else {
    state.intent = intent
  }
  state.score = score
},
setUiState: (state, status) => {
  state.uiState = status
},
setIntent: (state, status) => {
  state.intent = status
},

This is all pretty straightforward. We’re passing in the state so that we can update it for each occurrence — with the exception of Intensity, which will increment the counter up and down, accordingly. We’re going to use that counter in the next section to update the visualization.

.then(({ data }) => {
  console.log('axios result', data)
  if (altMaps.hasOwnProperty(data.query)) {
    commit('newIntent', {
      intent: altMaps[data.query],
      score: 1
    })
  } else {
    commit('newIntent', data.topScoringIntent)
  }
  commit('setUiState', 'idle')
  commit('setZoom')
})
.catch(err => {
  console.error('axios error', err)
})

In this action, we’ll commit the mutations we just went over or log an error if something goes wrong.

The way that the logic works, the user will do the initial recording to say how they’re feeling. They’ll hit a button to kick it all off. The visualization will appear and, at that point, the app will continuously listen for the user to say less or more to control the returned visualization. Let’s set up the rest of the app.
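The continuous-listening step itself isn't shown here, but one way to picture the routing behind it is a small handler that decides what to do with each recognized utterance. Everything below (the routeUtterance name, the local shortcuts for "more"/"less", the ignore-while-fetching guard) is my own illustrative sketch, not code from the demo repo:

```javascript
// Hypothetical sketch of an utterance router. In the real app the browser's
// speech recognition produces the utterance and Vuex handles the rest; here
// we just return a plain description of the action to take.
function routeUtterance(utterance, uiState) {
  const text = utterance.trim().toLowerCase()

  // Don't pile up requests while a LUIS call is already in flight
  if (uiState === 'fetching') return { type: 'ignore' }

  // "more" / "less" could be short-circuited locally for snappier feedback
  if (text === 'more') {
    return { type: 'commit', mutation: 'newIntent', payload: { intent: 'Intensity.More', score: 1 } }
  }
  if (text === 'less') {
    return { type: 'commit', mutation: 'newIntent', payload: { intent: 'Intensity.Less', score: 1 } }
  }

  // Anything else goes to LUIS for intent scoring
  return { type: 'dispatch', action: 'getUnderstanding', payload: text }
}
```

In this framing, the initial mood recording and the later "less"/"more" adjustments flow through the same funnel, which keeps the listening loop simple.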

Setting up the app

In App.vue, we’ll show two different components for the middle of the page depending on whether or not we’ve already specified our mood.

<app-recordintent v-if="intent === 'None'" />
<app-recordintensity v-if="intent !== 'None'" :emotion="intent" />

Both of these will show information for the viewer as well as a SineWaves component while the UI is in a listening state.

The base of the application is where the visualization will be displayed. It will show with different props depending on the mood. Here are two examples:

<app-base
  v-if="intent === 'Excited'"
  :t-config.a="1"
  :t-config.b="200"
/>

<app-base
  v-if="intent === 'Nervous'"
  :t-config.a="1"
  :color="0xff0000"
  :wireframe="true"
  :rainbow="false"
  :emissive="true"
/>

Setting up the data visualization

I wanted to work with kaleidoscope-like imagery for the visualization and, after some searching, found this repo. The way it works is that a shape turns in space and this will break the image apart and show pieces of it like a kaleidoscope. Now, that might sound awesome because (yay!) the work is done, right?

Unfortunately not.

There were a number of major changes that needed to be done to make this work, and it actually ended up being a massive undertaking, even if the final visual expression appears similar to the original.

  • Due to the fact that we would need to tear down the visualization if we decided to change it, I had to convert the existing code to use bufferArrays, which are more performant for this purpose.
  • The original code was one large chunk, so I broke up some of the functions into smaller methods on the component to make it easier to read and maintain.
  • Because we want to update things on the fly, I had to store some of the items as data in the component, and eventually as props that it would receive from the parent. I also included some nice defaults (excited is what all of the defaults look like).
  • We use the counter from the Vuex state to update the distance of the camera’s placement relative to the object so that we can see less or more of it and thus it becomes more and less complex.
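The setZoom mutation committed by the Vuex action isn't shown in the post; a minimal sketch of how it might derive the camera distance from that counter (the base value, step, and clamp range are my assumptions, not the repo's actual constants) could look like:

```javascript
// Hypothetical setZoom mutation: translate the "more"/"less" counter into
// a camera zoom value. The base of 3 matches the initial `zoom: 3` state;
// the clamp just keeps the shape from disappearing entirely.
const setZoom = (state) => {
  const next = 3 + state.counter
  state.zoom = Math.min(Math.max(next, 1), 10)
}
```

Since the counter increments on "more" and decrements on "less" in the newIntent mutation, each utterance nudges the camera one step in or out.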

In order to change up the way that it looks according to the configurations, we’ll create some props:

props: {
  numAxes: {
    type: Number,
    default: 12,
    required: false
  },
  ...
  tConfig: {
    default() {
      return {
        a: 2,
        b: 3,
        c: 100,
        d: 3
      }
    },
    required: false
  }
},

We’ll use these when we create the shapes:

createShapes() {
  this.bufferCamera.position.z = this.shapeZoom

  if (this.torusKnot !== null) {
    this.torusKnot.material.dispose()
    this.torusKnot.geometry.dispose()
    this.bufferScene.remove(this.torusKnot)
  }

  var shape = new THREE.TorusKnotGeometry(
      this.tConfig.a,
      this.tConfig.b,
      this.tConfig.c,
      this.tConfig.d
    ),
    material
  ...
  this.torusKnot = new THREE.Mesh(shape, material)
  this.torusKnot.material.needsUpdate = true

  this.bufferScene.add(this.torusKnot)
},

As we mentioned before, this is now split out into its own method. We’ll also create another method that kicks off the animation, which will also restart whenever it updates. The animation makes use of requestAnimationFrame:

animate() {
  this.storeRAF = requestAnimationFrame(this.animate)

  this.bufferScene.rotation.x += 0.01
  this.bufferScene.rotation.y += 0.02

  this.renderer.render(
    this.bufferScene,
    this.bufferCamera,
    this.bufferTexture
  )
  this.renderer.render(this.scene, this.camera)
},

We’ll create a computed property called shapeZoom that will return the zoom from the store. If you recall, this will be updated as the user's voice changes the intensity.

computed: {
  shapeZoom() {
    return this.$store.state.zoom
  }
},

We can then use a watcher to see if the zoom level changes and cancel the animation, recreate the shapes, and restart the animation.

watch: {
  shapeZoom() {
    this.createShapes()
    cancelAnimationFrame(this.storeRAF)
    this.animate()
  }
},

In data, we’re also storing some things we’ll need for instantiating the three.js scene — most notably making sure that the camera is exactly centered.

data() {
  return {
    bufferScene: new THREE.Scene(),
    bufferCamera: new THREE.PerspectiveCamera(75, 800 / 800, 0.1, 1000),
    bufferTexture: new THREE.WebGLRenderTarget(800, 800, {
      minFilter: THREE.LinearMipMapLinearFilter,
      magFilter: THREE.LinearFilter,
      antialias: true
    }),
    camera: new THREE.OrthographicCamera(
      window.innerWidth / -2,
      window.innerWidth / 2,
      window.innerHeight / 2,
      window.innerHeight / -2,
      0.1,
      1000
    ),

There’s more to this demo, if you’d like to explore the repo or set it up yourself with your own parameters. The init method does what you think it might: it initializes the whole visualization. I’ve commented a lot of the key parts if you’re peeping at the source code. There’s also another method that updates the geometry that’s called — you guessed it — updateGeometry. You may notice a lot of vars in there as well. That’s because it’s common to reuse variables in this kind of visualization. We kick everything off by calling this.init() in the mounted() lifecycle hook.

It’s pretty fun to see how far you can get creating things for the web that don’t necessarily need any hand movement to control. It opens up a lot of opportunities!

The post Voice-Controlled Web Visualizations with Vue.js and Machine Learning appeared first on CSS-Tricks.

Sign Up vs. Signup

CSS-Tricks - Fri, 10/26/2018 - 8:17am

Anybody building a site that requires users to create accounts is going to face this language challenge. You'll probably have this language strewn across your entire site, from prominent calls-to-action in your homepage hero, to persistent header buttons, to your documentation.

So which is correct? "Sign Up" or "Signup"? Let's try to figure it out.

With some light internet grammar research, the term "sign up" is a verbal phrase. As in, "sign" is a verb (describes an action) and "sign up" is a verb plus a complement — participial phrase, best I can tell. That sounds about right to me.

My best guess before looking into this was that "signup" isn't even a word at all, and more of a lazy internet mistake. Just like "frontend" isn't a word. It's either "front-end" (a compound adjective as in a front-end developer), or "front end" (as in, "Your job is to work on the front end.").

I was wrong, though. "Signup" is a noun. Like a thing. As in, "Go up the hallway past the water fountain and you'll see the signup on the wall." Which could certainly be a digital thing as well. Seems to me it wouldn't be wrong to call a form that collects a user's name and email address a "signup form."

"Sign-up" is almost definitely wrong, as it's not a compound word or compound adjective.

The fact that "sign up" and "signup" are both legit words/phrases makes this a little tricky. Having a verbal phrase as a button seems like a solid choice, but I wouldn't call it wrong to have a button that said "Signup," since the button presumably links directly to a form in which you can sign up, and that's the correct noun for it.

Let's see what some popular websites do.

Twitter goes with "Sign Up" and "Log in." We haven't even talked about the difference between "Log in" and "Login," but the difference is very much the same. Verbal phrase vs. noun. The only thing weird about Twitter's approach here is the capitalization of "Up" and the lowercase "in." Twitter seems giant enough that they must have thought of this and decided this intentionally, so I'd love to understand why because it looks like a mistake to my eyes.

Facebook, like Twitter, goes with "Sign Up" and "Log In."

Google goes with "Sign in" and "Create account." It's not terribly rare to see companies use the "Create" verb. Microsoft's Azure site, for example, pairs the copy "Create your account today" with a "Start free" button. Slack uses "Sign in" and "Get Started."

I can see the appeal of going with symmetry. Zoom uses "SIGN IN" and "SIGN UP" with the use of all-caps giving a pass on having to decide which words are capitalized.

Figma goes the "Sign In" and "Sign up" route, almost having symmetry — but what's up with the mismatched capitalization? I thought, if anything, they'd go with a lowercase "i" because the uppercase "I" can look like a lowercase "L" and maybe that's slightly weird.

At CodePen, we rock the "Sign Up" and "Log In" and try to be super consistent through the entire site using those two phrases.

If you're looking for a conclusion here, I'd say that it probably doesn't matter all that much. There are so many variations out there that people are probably used to it and you aren't losing customers over it. It's not like many will know the literal definition of "Signup." I personally like active verb phrases — like "Sign Up," "Log In," or "Sign In" — with no particular preference for capitalization.

The post Sign Up vs. Signup appeared first on CSS-Tricks.

CSS-Tricks Chronicle XXXIV

Css Tricks - Fri, 10/26/2018 - 4:02am

Hey gang, time for another broad update about various goings on as we tend to do occasionally. Some various happenings around here, appearances on other sites, upcoming conferences, and the like.

I'm speaking at a handful of conferences coming up!

At the end of this month, October 29th-30th, I'll be speaking at JAMstack_conf. Ever since I went to a jQuery conference several million years ago (by my count), I've always had a special place in my heart for conferences with a tech-specific focus. Certainly this whole world of JAMstack and serverless can be pretty broad, but it's more focused than a general web design conference.

In December, I'll be at WordCamp US. I like getting to go to WordPress-specific events to help me stay current on that community. CSS-Tricks is, and always has been, a WordPress site, as are many other sites I manage. I like to keep my WordPress development chops up the best I can. I imagine the Gutenberg talk will be hot and heavy! I'll be speaking as well, generally about front-end development.

Next Spring, March 4th-6th, I'll be in Seattle for An Event Apart!

Over on ShopTalk, Dave and I have kicked off a series of shows we're calling "How to Think Like a Front-End Developer."

I've been fascinated by this idea for a while and have been collecting thoughts on it. I have my own ideas, but I want to contrast them with the ideas of other front-end developers much more accomplished than myself! My goal is to turn all this into a talk that I can give toward the end of this year and next year. This is partially inspired by some posts we've published here over the years:

...as well other people's work, of course, like Brad Frost and Dan Mall's Designer/Developer Workflow, and Lara Schenck and Mandy Michael's thoughts on front-end development. Not to mention seismic shifts in the front-end development landscape through New JavaScript and Serverless.

I've been collecting these articles the best I can.

The ShopTalk series is happening now! A number of episodes are already published:

Speaking of ShopTalk, a while back Dave and I mused about wanting to redesign the ShopTalk Show website. We did all this work on the back end making sure all the data from our 350+ episodes is super clean and easy to work with, then I slapped a design on top of it that is honestly pretty bad.

Dan Mall heard us talk about it and reached out to us to see if he could help. Not to do the work himself... that would be amazing, but Dan had an even better idea. Instead, we would all work together to find a newcomer and have them design the site under Dan's direction and guidance. Here's Dan's intro post (and note that applications are now closed).

We're currently in the process of narrowing down the applicants and interviewing finalists. We're planning on being very public about the process, so not only will we hopefully be helping someone who could use a bit of a break into this industry, but we'll also help anyone else who cares to watch it happen.

I've recently had the pleasure of being a guest on other shows.

First up, I was on the Script & Style Show with David Walsh and Todd Gardner

I love that David has resurrected the name Script & Style. We did a site together quite a few years back with that same name!

I have a very short interview on Makerviews:

What one piece of advice would you give to other makers?

I'd say that you're lucky. The most interesting people I know that seem to lead the most fulfilling, long, and interesting lives are those people who do interesting things, make interesting things, and generally just engage with life at a level deeper than just skating by or watching.

And my (third?) appearance on Thundernerds:

Watch/Listen as we talk w @chriscoyier at @frontendconf 2018. We chat with Chris Coyier about his talk "The All-Powerful Front-End Developer" --> https://t.co/exGJ4sEsXE #CSS #developer #UX pic.twitter.com/C9ybTkK6Rb

— Thunder Nerds ⚡️ (@thundernerds) May 2, 2018

If you happen to live in Central Oregon, note that our BendJS meetups have kicked back up for the season. We've been having them right at our CodePen office and it's been super fun.

I haven't even gotten to CodePen stuff yet! Since my last chronicle, we've brought in a number of new employees, like Klare Frank, Cassidy Williams, and now Stephen Shaw. We're always chugging away at polishing and maintaining CodePen, building new features, encouraging community, and everything else that running a social coding site requires.

Oh and hey! CodePen is now a registered trademark, so I can do this: CodePen®. One of our latest user-facing features is pinned items. Rest assured, we have loads of other features that are in development for y'all that are coming soon.

If you're interested in the technology side of CodePen, we've dug into lots of topics lately on CodePen radio like:

The post CSS-Tricks Chronicle XXXIV appeared first on CSS-Tricks.

Continuous Integration: The What, Why and How

Css Tricks - Thu, 10/25/2018 - 4:34am

Not long ago, I had a novice understanding of Continuous Integration (CI) and thought it seemed like an extra process that forces engineers to do extra work on already large projects. My team began to implement CI into projects and, after some hands-on experience, I realized its great benefits, not only to the company, but to me, an engineer! In this post, I will describe CI, the benefits I’ve discovered, and how to implement it quickly and for free.

CI and Continuous Delivery (CD) are usually discussed together. Covering both CI and CD within one post is a lot to write and read all at once, so we’ll only discuss CI here. Maybe I will cover CD in a future post. 😉

What is CI?

Continuous Integration, as I understand it, is a pattern of programming combining testing, safety checks, and development practices to confidently and continuously push code from a development branch to a production-ready branch.

Microsoft Word is an example of CI. Words are written into the program and checked against spelling and grammar algorithms to assert a document's general readability and spelling.

Why CI should be used everywhere

We’ve already touched on this a bit, but the biggest benefit of CI that I see is that it saves a lot of money by making engineers more productive. Specifically, it provides quicker feedback loops, easier integration, and it reduces bottlenecks. Directly correlating CI to company savings is hard because SaaS costs scale as the user base changes. So, if a developer wants to sell CI to the business, the formula below can be utilized. Curious just how much it can save? My friend, David Inoa, created the following demo to help calculate the savings.

See the Pen Continuous Integration (CI) Company Cost Savings Estimator by David (@davidinoa) on CodePen.

What really excites me enough to shout from the rooftops is how CI can benefit you and me as developers!

For starters, CI will save you time. How much? We’re talking hours per week. How? Oh, do I want to tell you! CI automatically tests your code and lets you know whether it’s okay to merge into a branch that goes to production. The time you’d otherwise spend manually testing your code and coordinating with others to get it production-ready really adds up.

Then there’s the way it helps prevent code fatigue. It sports tools like Greenkeeper, which can automatically set up — and even merge — pull requests following a code review. This keeps code up-to-date and allows developers to focus on what we really need to do. You know, like writing code or living life. Code updates within packages usually only need to be reviewed for major version updates, so there’s less need to track every minor release for breaking changes that require action.

CI takes a lot of the guesswork out of updating dependencies that otherwise would take a lot of research and testing. No excuses, use CI!

When talking to developers, the conversation usually winds up something like:

"I would use CI but…[insert excuse]."

To me, that’s a cop out! CI can be free. It can also be easy. It’s true that the benefits of CI come with some costs, including monthly fees for tools like CircleCI or Greenkeeper. But that’s a drop in the bucket with the long-term savings it provides. It’s also true that it will take time to set things up. But it’s worth calling out that the power of CI can be used for free on open source projects. If you need or want to keep your code private and don’t want to pay for CI tools, then you really can build your own CI setup with a few great npm packages.

So, enough with the excuses and behold the power of CI!

What problems does CI solve?

Before digging in much further, we should cover the use cases for CI. It solves a lot of issues and comes in handy in many situations:

  • When more than one developer wants to merge into a production branch at once
  • When mistakes are not caught or cannot be fixed before deployment
  • When dependencies are out of date
  • When developers have to wait extended periods of time to merge code
  • When packages are dependent on other packages
  • When a package is updated and must be changed in multiple places

CI tests updates and prevents bugs from being deployed.

Recommended CI tools

Let’s look at the high level parts used to create a CI feedback loop with some quick code bits to get CI setup for any open source project today. We’ll break this down into digestible chunks.

Documentation

In order to get CI working for me right away, I usually set CI up to test my initial documentation for a project. Specifically, I use MarkdownLint and Write Good because they provide all the features and functionality I need to write tests for this part of the project.

The great news is that GitHub provides standard templates and there is a lot of content that can be copied to get documentation setup quickly. Read more about quickly setting up documentation and creating a documentation feedback loop.

I keep a package.json file at the root of the project and run a script command like this:

"grammar": "write-good *.md --no-passive", "markdownlint": "markdownlint *.md"

Those two lines allow me to start using CI. That’s it! I can now run CI to test grammar.

At this point, I can move onto setting up CircleCI and Greenkeeper to help me make sure that packages are up to date. We’ll get to that in just a bit.

Unit testing

Unit tests are a method for testing small blocks (units) of code to ensure that the expected behavior of that block works as intended.

Unit tests provide a lot of help with CI. They define code quality and provide developers with feedback without having to push/merge/host code. Read more about unit tests and quickly setting a unit test feedback loop.

Here is an example of a very basic unit test without using a library:

const addsOne = (num) => num + 1

const numPlus1 = addsOne(3) // Add one to the number 3; expect 4
const stringNumPlus1 = addsOne('3') // Add one to the string '3'

/**
 * console.assert
 * https://developer.mozilla.org/en-US/docs/Web/API/console/assert
 * @param test
 * @param string
 * @returns string if the test fails
 **/
console.assert(numPlus1 === 4, 'The variable `numPlus1` is not 4!')
console.assert(stringNumPlus1 === 4, 'The variable `stringNumPlus1` is not 4!')

Over time, it is nice to use libraries like Jest to unit test code, but this example gives you an idea of what we’re looking at.

Here’s an example of the same test above using Jest:

const addsOne = (num) => num + 1

describe('addsOne', () => {
  it('adds a number', () => {
    const numPlus1 = addsOne(3)
    expect(numPlus1).toEqual(4)
  })

  it('will not add a string', () => {
    const stringNumPlus1 = addsOne('3')
    expect(stringNumPlus1 === 4).toBeFalsy()
  })
})

Using Jest, tests can be hooked up for CI with a command in a package.json like this:

"test:jest": "jest --coverage",

The flag --coverage configures Jest to report test coverage.

Safety checks

Safety checks help communicate code and code quality. Documentation, document templates, linters, spell checkers, and type checkers are all safety checks. These tools can be automated to run during commits, in development, during CI, or even in a code editor.

Safety checks fall into more than one category of CI: feedback loop and testing. I’ve compiled a list of the types of safety checks I typically bake into a project.

All of these checks may seem like another layer of code abstraction or learning, so be gentle on yourself and others if this feels overwhelming. These tools have helped my own team bridge experience gaps, define shareable team patterns, and assist developers when they're confused about what their code is doing.

  • Committing, merging, communicating: Tools like husky, commitizen, GitHub Templates, and Changelogs help keep CI running clean code and form a nice workflow for a collaborative team environment.
  • Defining code (type checkers): Tools like TypeScript define and communicate code interfaces — not only types!
  • Linting: This is the practice of ensuring that something matches defined standards and patterns. There’s a linter for nearly all programming languages and you’ve probably worked with common ones, like ESlint (JavaScript) and Stylelint (CSS) in other projects.
  • Writing and commenting: Write Good helps catch grammar errors in documentation. Tools like JSDoc, Doctrine, and TypeDoc assist in writing documentation and add useful hints in code editors. Both can compile into markdown documentation.

ESlint is a good example for how any of these types of tools are implemented in CI. For example, this is all that’s needed in package.json to lint JavaScript:

"eslint": "eslint ."

Obviously, there are many options that allow you to configure a linter to conform to you and your team’s coding standards, but you can see how practical it can be to set up.

High level CI setup

Getting CI started for a repository often takes very little time, yet there are plenty of advanced configurations we can also put to use, if needed. Let’s look at a quick setup and then move into a more advanced configuration. Even the most basic setup is beneficial for saving time and code quality!

Two features that can save developers hours per week with simple CI are automatic dependency updates and build testing. Dependency updates are written about in more detail here.

Build testing refers to installing node_modules during CI by running an install (for example, npm install) and verifying that all node_modules install as expected. This is a simple task, and it does fail sometimes. Ensuring that node_modules installs as expected saves considerable time!

Quick CI Setup

CI can be set up automatically for both CircleCI and Travis! If a valid test command is already defined in the repository's package.json, then CI can be implemented without any more configuration.

In a CI tool, like CircleCI or Travis, you can search for the repository after logging in or authenticating. From there, follow the CI tool's UI to start testing.

For JavaScript, CircleCI will look at a repository's package.json to see if a valid test script is defined. If it is, then CircleCI will begin running CI automatically! Read more about setting up CircleCI automatically here.

Advanced configurations

If unit tests are unfinished, or if more configuration is needed, a .yml file can be added for a CI tool (like CircleCI) to define the scripts the runner executes.

Below is how to set up a custom CircleCI configuration with JavaScript linting (again, using ESlint as an example).

First off, run this command:

mkdir .circleci && touch .circleci/config.yml

Then add the following to the generated file:

defaults: &defaults
  working_directory: ~/code
  docker:
    - image: circleci/node:10
  environment:
    NPM_CONFIG_LOGLEVEL: error # make npm commands less noisy
    JOBS: max # https://gist.github.com/ralphtheninja/f7c45bdee00784b41fed

version: 2
jobs:
  build:
    <<: *defaults
    steps:
      - checkout
      - run: npm i
      - run: npm run eslint:ci

After these steps are completed and after CircleCI has been configured in GitHub (more on that here), CircleCI will pick up .circleci/config.yml and lint JavaScript in a CI process when a pull request is submitted.

I created a folder with examples in this demo repository to show ideas for configuring CI with config.yml files, and you can reference it for your own project or use the files as a starting point.

There are even more CI tools that can be set up to help save developers more time, like auto-merging, auto-updating, monitoring, and much more!

Summary

We covered a lot here! To sum things up, setting up CI is very doable and can even be free of cost. With additional tooling (both paid and open source), we can have more time to code, and more time to write more tests for CI — or enjoy more life away from the screen!

Here are some demo repositories to help developers get setup fast or learn. Please feel free to reach out within the repositories with questions, ideas or improvements.

The post Continuous Integration: The What, Why and How appeared first on CSS-Tricks.

The Most Flexible eSign API

Css Tricks - Thu, 10/25/2018 - 4:00am

(This is a sponsored post.)

With our robust SDK, super clean dashboard, detailed documentation, and world-class support, HelloSign API is one of the most flexible and powerful APIs on the market. Start building for free today.

Direct Link to ArticlePermalink

The post The Most Flexible eSign API appeared first on CSS-Tricks.

Demystifying JavaScript Testing

Css Tricks - Wed, 10/24/2018 - 12:09pm

Many people have messaged me, confused about where to get started with testing. Just like everything else in software, we work hard to build abstractions to make our jobs easier. But that amount of abstraction evolves over time, until the only ones who really understand it are the ones who built the abstraction in the first place. Everyone else is left with taking the terms, APIs, and tools at face value and struggling to make things work.

One thing I believe about abstraction in code is that the abstraction is not magic — it’s code. Another thing I believe about abstraction in code is that it’s easier to learn by doing.

Imagine that a less seasoned engineer approaches you. They’re hungry to learn, they want to be confident in their code, and they’re ready to start testing. 👍 Ever prepared to learn from you, they’ve written down a list of terms, APIs, and concepts they’d like you to define for them:

  • Assertion
  • Testing Framework
  • The describe/it/beforeEach/afterEach/test functions
  • Mocks/Stubs/Test Doubles/Spies
  • Unit/Integration/End to end/Functional/Accessibility/Acceptance/Manual testing

So...

Could you rattle off definitions for that budding engineer? Can you explain the difference between an assertion library and a testing framework? Or, are they easier for you to identify than explain?

Here’s the point. The better you understand these terms and abstractions, the more effective you will be at teaching them. And if you can teach them, you’ll be more effective at using them, too.

Enter a teach-an-engineer-to-fish moment. Did you know that you can write your own assertion library and testing framework? We often think of these abstractions as beyond our capabilities, but they’re not. Each of the popular assertion libraries and frameworks started with a single line of code, followed by another and then another. You don’t need any tools to write a simple test.

Here’s an example:

const {sum} = require('../math')

const result = sum(3, 7)
const expected = 10

if (result !== expected) {
  throw new Error(`${result} is not equal to ${expected}`)
}

Put that in a module called test.js and run it with node test.js and, poof, you can start getting confident that the sum function from the math.js module is working as expected. Make that run on CI and you can get the confidence that it won’t break as changes are made to the codebase. 🏆

Let’s see what a failure would look like with this:

Terminal window showing an error indicating -4 is not equal to 10.

So apparently our sum function is subtracting rather than adding and we’ve been able to automatically detect that through this script. All we need to do now is fix the sum function, run our test script again and:

Terminal window showing that we ran our test script and no errors were logged.

Fantastic! The script exited without an error, so we know that the sum function is working. This is the essence of a testing framework. There’s a lot more to it (e.g. nicer error messages, better assertions, etc.), but this is a good starting point to understand the foundations.
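To give a flavor of that "lot more," here's a rough, hypothetical sketch of how a tiny test/expect helper could layer labeled results and nicer reporting on top of plain thrown errors. The names mirror Jest's API, but this is an illustration, not Jest's implementation:

```javascript
// A tiny, hypothetical "testing framework": `test` catches failures so
// one failing test doesn't stop the rest, and labels each result.
const results = [];

function test(title, callback) {
  try {
    callback();
    results.push({ title, passed: true });
  } catch (error) {
    results.push({ title, passed: false, error: error.message });
  }
}

function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`${actual} is not equal to ${expected}`);
      }
    },
  };
}

const sum = (a, b) => a + b;

test('sum adds numbers', () => {
  expect(sum(3, 7)).toBe(10);
});

test('sum fails loudly', () => {
  expect(sum(3, 7)).toBe(11); // intentionally failing, to show the report
});

// Print a labeled PASS/FAIL line for each test
results.forEach((r) => {
  console.log((r.passed ? 'PASS' : 'FAIL') + ': ' + r.title + (r.error ? ' - ' + r.error : ''));
});
```

Because each failure is caught and recorded instead of thrown straight to the top, the second (intentionally broken) test doesn't prevent the first one from reporting.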

Once you understand how the abstractions work at a fundamental level, you’ll probably want to use them because, hey, you just learned to fish and now you can go fishing. And we have some pretty phenomenal fishing poles, uh, tools available to us. My favorite is the Jest testing platform. It’s amazingly capable, fully featured and allows me to write tests that give me the confidence I need to not break things as I change code.

I feel like fundamentals are so important that I included an entire module about them on TestingJavaScript.com. This is the place where you can learn the smart, efficient way to test any JavaScript application. I’m really happy with what I’ve created for you. I think it’ll help accelerate your understanding of testing tools and abstractions by giving you the chance to implement parts from scratch. The (hopeful) result? You can start writing tests that are maintainable and built to instill confidence in your code day after day. 🎣

The early bird sale is going on right now! 40% off every tier! The sale is going away in the next few days so grab this ASAP!

TestingJavaScript.com - Learn the smart, efficient way to test any JavaScript application.

P.S. Give this a try: Tweet what’s the difference between a testing framework and an assertion library? In my course, I’ll not only explain it, we’ll build our own!

The post Demystifying JavaScript Testing appeared first on CSS-Tricks.

Hand roll charts with D3 like you actually know what you’re doing

Css Tricks - Wed, 10/24/2018 - 3:42am

Charts! My least favorite subject besides Social Studies. But you just won't get very far in this industry before someone wants you to make a chart. I don't know what it is with people and charts, but apparently we can't have a civilization without a bar chart showing Maggie's sales for last month so by ALL MEANS — let's make a chart.

Yes, I know this is not how you would display this data. I’m trying to make a point here.

To prepare you for that impending "OMG I'm going to have to make a chart" existential crisis that, much like death, we like to pretend is never going to happen, I'm going to show you how to hand-roll your own scatter plot graph with D3.js. This article is heavy on the code side and your first glance at the finished code is going to trigger your "fight or flight" response. But if you can get through this article, I think you will be surprised at how well you understand D3 and how confident you are that you can go make some other chart that you would rather not make.

Before we do that, though, it's important to talk about WHY you would ever want to roll your own chart.

Building vs. Buying

When you do have to chart, you will likely reach for something that comes "out of the box." You would never ever hand-roll a chart. The same way you would never sit around and smash your thumb with a hammer; it's rather painful and there are more productive ways to use your hammer. Charts are rather complex user interface items. It's not like you're center-aligning some text in a div here. Libraries like Chart.js or Kendo UI have pre-made charts that you can just point at your data. Developers have spent thousands of hours perfecting these charts. You would never ever build one of these yourself.

Or would you?

Charting libraries are fantastic, but they do impose a certain amount of restrictions on you…and sometimes they actually make it harder to do even the simple things. As Peter Parker's uncle said before he over-acted his dying scene in Spiderman, "With great charting libraries, comes great trade-off in flexibility."

Toby never should have been Spiderman. FITE ME.

This is exactly the scenario I found myself in when my colleague, Jasmine Greenaway, and I decided that we could use charts to figure out who @horse_js is. In case you aren't already a big @horse_js fan, it’s a Twitter parody account that quotes people out of context. It's extremely awesome.

We pulled every tweet from @horse_js for the past two years. We stuck that in a Cosmos DB database and then created an Azure Function endpoint to expose the data.

And then, with a sinking feeling in our stomachs, we realized that we needed a chart. We wanted to be able to see what the data looked like as it occurred over time. We thought being able to see the data visually in a Time Series Analysis might help us identify some pattern or gain some insight about the twitter account. And indeed, it did.

We charted every tweet that @horse_js has posted in the last two years. When we look at that data on a scatter plot, it looks like this:

See the Pen wYxYNd by Burke Holland (@burkeholland) on CodePen.

Coincidentally, this is the thing we are going to build in this article.

Each tweet is displayed with the date on the x-axis, and the time of day on the y. I thought this would be easy to do with a charting library, but all the ones I tried weren't really equipped to handle the scenario of a date across the x and a time on the y. I also couldn't find any examples of people doing it online. Am I breaking new ground here? Am I a data visualization pioneer?

Probably. Definitely.

So, let's take a look at how we can build this breathtaking scatter plot using D3.

Getting started with D3

Here's the thing about D3: it looks pretty awful. I just want to get that out there so we can stop pretending like D3 code is fun to look at. It's not. There's no shame in saying that. Now that we've invited that elephant in the room to the tea party, allow me to insinuate that even though D3 code looks pretty bad, it's actually not. There's just a lot of it.

To get started, we need D3. I am using the CDN include for D3 5 for these examples. I'm also using Moment to work with the dates, which we’ll get to later.

<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.7.0/d3.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.22.2/moment.min.js"></script>

D3 works with SVG. That’s what it does. It basically marries SVG with data and provides some handy pre-built mechanisms for visualizing it — things such as an axis. Or axees? Axises? Whatever the plural of “axis” is. But for now, just know that it’s like jQuery for SVG.

So, the first thing we need is an SVG element to work with.

<svg id="chart"></svg>

OK. Now we’re ready to start D3’ing our way to data visualization infamy. The first thing we’re going to do is make our scatter plot a class. We want to make this thing as generic as possible so that we can re-use it with other sets of data. We’ll start with a constructor that takes two parameters. The first will be the class or id of the element we are about to work with (in our case, that’s #chart) and the second is an object that will allow us to pass in any parameters that might vary from chart-to-chart (e.g. data, width, etc.).

class ScatterPlot {
  constructor(el, options) {
  }
}

The chart code itself will go in a render function, which will also require the data set we’re working with to be passed.

class ScatterPlot {
  constructor(el, options) {
    this.render(options.data);
  }

  render(data) {
  }
}

The first thing we’ll do in our render method is set some size values and margins for our chart.

class ScatterPlot {
  constructor(el, options) {
    this.el = el;
    this.data = options.data || [];
    this.width = options.width || 500;
    this.height = options.height || 400;
    this.render();
  }

  render() {
    let margin = { top: 20, right: 20, bottom: 50, left: 60 };
    let width = this.width - margin.left - margin.right;
    let height = this.height - margin.top - margin.bottom;
    let data = this.data;
  }
}

I mentioned that D3 is like jQuery for SVG, and I think that analogy sticks. So you can see what I mean, let’s make a simple SVG drawing with D3.

For starters, you need to select the DOM element that SVG is going to work with. Once you do that, you can start appending things and setting their attributes. D3, just like jQuery, is built on the concept of chaining, so each function that you call returns an instance of the element on which you called it. In this manner, you can keep on adding elements and attributes until the cows come home.
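As an aside, the mechanics of chaining are easy to demonstrate in plain JavaScript. Here's a minimal, hypothetical sketch (not D3's actual implementation): each method returns the object it was called on, which is what lets the calls stack up.

```javascript
// A minimal sketch of a chainable API: every method returns `this`,
// so calls can be strung together D3/jQuery-style.
class Shape {
  constructor(name) {
    this.name = name;
    this.attributes = {};
  }

  attr(key, value) {
    this.attributes[key] = value;
    return this; // returning `this` is what makes chaining work
  }
}

const rect = new Shape('rect')
  .attr('width', 100)
  .attr('height', 100)
  .attr('fill', 'red');

console.log(rect.attributes); // { width: 100, height: 100, fill: 'red' }
```

Every `attr` call hands back the same `Shape` instance, so the next call in the chain has something to latch onto.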

For instance, let’s say we wanted to draw a square. With D3, we can draw a rectangle (in SVG that’s a rect), adding the necessary attributes along the way.

See the Pen zmdpJZ by Burke Holland (@burkeholland) on CodePen.

NOW. At this point you will say, “But I don’t know SVG.” Well, I don’t either. But I do know how to Google and there is no shortage of articles on how to do pretty much anything in SVG.

So, how do we get from a rectangle to a chart? This is where D3 becomes way more than just “jQuery for drawing.”

First, let’s create a chart. We start with an empty SVG element in our markup. We use D3 to select that empty svg element (called #chart) and define its width and height as well as margins.

// create the chart
this.chart = d3.select(this.el)
  .attr('width', width + margin.right + margin.left)
  .attr('height', height + margin.top + margin.bottom);

And here’s what it looks like:

See the Pen EdpOqy by Burke Holland (@burkeholland) on CodePen.

AMAZING! Nothing there. If you open the dev tools, you’ll see that there is something there. It’s just an empty something. Kind of like my soul.

That’s your chart! Let’s go about putting some data in it. For that, we are going to need to define our x and y-axis.

That’s pretty easy in D3. You call the axisBottom method. Here, I am also formatting the tick marks with the right date format to display.

let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

I am also passing an “x” parameter to the axisBottom method. What is that? That is called a scale.

D3 scales

D3 has something called scales. Scales are just a way of telling D3 where to put your data and D3 has a lot of different types of scales. The most common kind would be linear — like a scale of data from 1 to 10. It also contains a scale just for time series data — which is what we need for this chart. We can use the scaleTime method to define a “scale” for our x-axis.

// define the x-axis
let minDateValue = d3.min(data, d => {
  return new Date(moment(d.created_at).format('MM-DD-YYYY'));
});
let maxDateValue = d3.max(data, d => {
  return new Date(moment(d.created_at).format('MM-DD-YYYY'));
});

let x = d3.scaleTime()
  .domain([minDateValue, maxDateValue])
  .range([0, width]);

let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

D3 scales use some terminology that is slightly intimidating. There are two main concepts to understand here: domains and ranges.

  • Domain: The range of possible values in your data set. In my case, I’m getting the minimum date from the array, and the maximum date from the array. Every other value in the data set falls between these two endpoints — so those "endpoints" define my domain.
  • Range: The range over which to display your data set. In other words, how spread out do you want your data to be? In our case, we want it constrained to the width of the chart, so we just pass width as the second parameter. If we passed a value like, say, 10000, our data would be spread out over 10,000 pixels. If we passed no value at all, it would draw all of the data on top of itself on the left-hand side of the chart... like the following image.
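To make domain and range concrete, here is a toy linear scale in plain JavaScript. This is just a sketch for intuition — makeLinearScale is a made-up helper, not part of D3; D3's real scales also handle dates, ticks, clamping, and more.

```javascript
// A toy linear scale: maps a value from the data's domain
// onto the chart's pixel range.
function makeLinearScale(domain, range) {
  const [d0, d1] = domain;
  const [r0, r1] = range;
  return value => r0 + ((value - d0) / (d1 - d0)) * (r1 - r0);
}

// Map data values 0–100 onto a 500px-wide chart area.
const x = makeLinearScale([0, 100], [0, 500]);

console.log(x(0));   // left edge: 0
console.log(x(50));  // middle: 250
console.log(x(100)); // right edge: 500
```

That mapping is all a scale is doing when D3 positions your points: data value in, pixel position out.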

The y-axis is built in the same way. Only, for it, we are going to be formatting our data for time, not date.

// define y axis
let minTimeValue = new Date().setHours(0, 0, 0, 0);
let maxTimeValue = new Date().setHours(23, 59, 59, 999);

let y = d3.scaleTime()
  .domain([minTimeValue, maxTimeValue])
  .nice(d3.timeDay)
  .range([height, 0]);

let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

The extra nice method call on the y scale tells the y-axis to format this time scale nicely. If we don’t include that, it won’t have a label for the top-most tick on the left-hand side because it only goes to 11:59:59 PM, rather than all the way to midnight. It’s a quirk, but we’re not making crap here. We need labels on all our ticks.

Now we’re ready to draw our axis to the chart. Remember that our chart has some margins on it. In order to properly position the items inside of our chart, we are going to create a grouping (g) element and set its width and height. Then, we can draw all of our elements in that container.

let main = this.chart.append('g')
  .attr('transform', `translate(${margin.left}, ${margin.top})`)
  .attr('width', width)
  .attr('height', height)
  .attr('class', 'main');

We’re drawing our container, accounting for margin and setting its width and height. Yes. I know. It’s tedious. But such is the state of laying things out in a browser. When was the last time you tried to horizontally and vertically center content in a div? Yeah, not so awesome prior to Flexbox and CSS Grid.

Now, we can draw our x-axis:

main.append('g')
  .attr('transform', `translate(0, ${height})`)
  .attr('class', 'main axis date')
  .call(xAxis);

We make a container element, and then “call” the xAxis that we defined earlier. D3 draws things starting at the top-left, so we use the transform attribute to offset the x-axis from the top so it appears at the bottom. If we didn’t do that, our chart would look like this...

By specifying the transform, we push it to the bottom. Now for the y-axis:

main.append('g')
  .attr('class', 'main axis date')
  .call(yAxis);

Let’s look at all the code we have so far, and then we’ll see what this outputs to the screen.

class ScatterPlot {
  constructor(el, options) {
    this.el = el;
    if (options) {
      this.data = options.data || [];
      this.tooltip = options.tooltip;
      this.pointClass = options.pointClass || '';
      this.width = options.width || 500;
      this.height = options.height || 400;
      this.render();
    }
  }

  render() {
    let margin = { top: 20, right: 15, bottom: 60, left: 60 };
    let height = this.height || 400;
    let width = (this.width || 500) - margin.right - margin.left;
    let data = this.data;

    // create the chart
    let chart = d3.select(this.el)
      .attr('width', width + margin.right + margin.left)
      .attr('height', height + margin.top + margin.bottom);

    // define the x-axis
    let minDateValue = d3.min(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });
    let maxDateValue = d3.max(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });

    let x = d3.scaleTime()
      .domain([minDateValue, maxDateValue])
      .range([0, width]);

    let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

    // define y axis
    let minTimeValue = new Date().setHours(0, 0, 0, 0);
    let maxTimeValue = new Date().setHours(23, 59, 59, 999);

    let y = d3.scaleTime()
      .domain([minTimeValue, maxTimeValue])
      .nice(d3.timeDay)
      .range([height, 0]);

    let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

    // define our content area
    let main = chart.append('g')
      .attr('transform', `translate(${margin.left}, ${margin.top})`)
      .attr('width', width)
      .attr('height', height)
      .attr('class', 'main');

    // draw x axis
    main.append('g')
      .attr('transform', `translate(0, ${height})`)
      .attr('class', 'main axis date')
      .call(xAxis);

    // draw y axis
    main.append('g')
      .attr('class', 'main axis date')
      .call(yAxis);
  }
}

See the Pen oaeybM by Burke Holland (@burkeholland) on CodePen.

We’ve got a chart! Call your friends! Call your parents! IMPOSSIBLE IS NOTHING!

Axis labels

Now let’s add some chart labels. By now you may have figured out that when it comes to D3, you are doing pretty much everything by hand. Adding axis labels is no different. All we are going to do is add an SVG text element, set its value and position it. That’s all.

For the x-axis, we can add the text label and position it using translate. We set its x position to the middle (width / 2) of the chart, then add the left-hand margin to make sure we are centered under just the chart area. I’m also using a CSS class named axis-label that has text-anchor: middle to make sure our text is originating from the center of the text element.

// text label for the x axis
chart.append("text")
  .attr("transform",
    "translate(" + ((width / 2) + margin.left) + ", " +
    (height + margin.top + margin.bottom) + ")")
  .attr('class', 'axis-label')
  .text("Date Of Tweet");

The y-axis is the same concept: a text element that we manually position. This one is positioned with absolute x and y attributes. This is because our transform is used to rotate the label, so we use the x and y properties to position it.

Remember: once you rotate an element, x and y rotate with it. That means that when the text element is on its side like it is here, y now pushes it left and right and x pushes it up and down. Confused yet? It’s OK, you’re in great company.

// text label for the y-axis
chart.append("text")
  .attr("transform", "rotate(-90)")
  .attr("y", 10)
  .attr("x", 0 - ((height / 2) + margin.top + margin.bottom))
  .attr('class', 'axis-label')
  .text("Time of Tweet - CST (-6)");

See the Pen oaeybM by Burke Holland (@burkeholland) on CodePen.

Now, like I said, it’s a LOT of code. That’s undeniable. But it’s not super complex code. It’s like LEGO: LEGO blocks are simple, but you can build pretty complex things with them. What I’m trying to say is it’s a highly sophisticated interlocking brick system.

Now that we have a chart, it’s time to draw our data.

Drawing the data points

This is fairly straightforward. As usual, we create a grouping to put all our circles in. Then we loop over each item in our data set and draw an SVG circle. We have to set the position of each circle (cx and cy) based on the current data item’s date and time value. Lastly, we set its radius (r), which controls how big the circle is.

let circles = main.append('g');

data.forEach(item => {
  circles.append('svg:circle')
    .attr('class', this.pointClass)
    .attr('cx', d => {
      return x(new Date(item.created_at));
    })
    .attr('cy', d => {
      let today = new Date();
      let time = new Date(item.created_at);
      return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
    })
    .attr('r', 5);
});

When we set the cx and cy values, we use the scale (x or y) that we defined earlier. We pass that scale the date or time value of the current data item and the scale will give us back the correct position on the chart for this item.

And, my good friend, we have a real chart with some real data in it.

See the Pen VEzdrR by Burke Holland (@burkeholland) on CodePen.

Lastly, let’s add some animation to this chart. D3 has some nice easing functions that we can use here. What we do is define a transition on each one of our circles. Basically, anything that comes after the transition method gets animated. Since D3 draws everything from the top-left, we can set the x position first and then animate the y. The result is the dots look like they are falling into place. We can use D3’s nifty easeBounce easing function to make those dots bounce when they fall.

data.forEach(item => {
  circles.append('svg:circle')
    .attr('class', this.pointClass)
    .attr('cx', d => {
      return x(new Date(item.created_at));
    })
    .transition()
    .duration(Math.floor(Math.random() * (3000 - 2000) + 1000))
    .ease(d3.easeBounce)
    .attr('cy', d => {
      let today = new Date();
      let time = new Date(item.created_at);
      return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
    })
    .attr('r', 5);
});

OK, so one more time, all together now…

class ScatterPlot {
  constructor(el, options) {
    this.el = el;
    this.data = options.data || [];
    this.width = options.width || 960;
    this.height = options.height || 500;
    this.render();
  }

  render() {
    let margin = { top: 20, right: 20, bottom: 50, left: 60 };
    let height = this.height - margin.bottom - margin.top;
    let width = this.width - margin.right - margin.left;
    let data = this.data;

    // create the chart
    let chart = d3.select(this.el)
      .attr('width', width + margin.right + margin.left)
      .attr('height', height + margin.top + margin.bottom);

    // define the x-axis
    let minDateValue = d3.min(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });
    let maxDateValue = d3.max(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });

    let x = d3.scaleTime()
      .domain([minDateValue, maxDateValue])
      .range([0, width]);

    let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

    // define y axis
    let minTimeValue = new Date().setHours(0, 0, 0, 0);
    let maxTimeValue = new Date().setHours(23, 59, 59, 999);

    let y = d3.scaleTime()
      .domain([minTimeValue, maxTimeValue])
      .nice(d3.timeDay)
      .range([height, 0]);

    let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

    // define our content area
    let main = chart.append('g')
      .attr('transform', `translate(${margin.left}, ${margin.top})`)
      .attr('width', width)
      .attr('height', height)
      .attr('class', 'main');

    // draw x axis
    main.append('g')
      .attr('transform', `translate(0, ${height})`)
      .attr('class', 'main axis date')
      .call(xAxis);

    // draw y axis
    main.append('g')
      .attr('class', 'main axis date')
      .call(yAxis);

    // text label for the y axis
    chart.append("text")
      .attr("transform", "rotate(-90)")
      .attr("y", 10)
      .attr("x", 0 - ((height / 2) + margin.top + margin.bottom))
      .attr('class', 'axis-label')
      .text("Time of Tweet - CST (-6)");

    // draw the data points
    let circles = main.append('g');

    data.forEach(item => {
      circles.append('svg:circle')
        .attr('class', this.pointClass)
        .attr('cx', d => {
          return x(new Date(item.created_at));
        })
        .transition()
        .duration(Math.floor(Math.random() * (3000 - 2000) + 1000))
        .ease(d3.easeBounce)
        .attr('cy', d => {
          let today = new Date();
          let time = new Date(item.created_at);
          return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
        })
        .attr('r', 5);
    });
  }
}

We can now make a call for some data and render this chart...

// get the data
let data = fetch('https://s3-us-west-2.amazonaws.com/s.cdpn.io/4548/time-series.json')
  .then(d => d.json())
  .then(data => {
    // massage the data a bit to get it in the right format
    let horseData = data.map(item => {
      return item.horse;
    });

    // create the chart
    let chart = new ScatterPlot('#chart', {
      data: horseData,
      width: 960
    });
  });

And here is the whole thing, complete with a call to our Azure Function returning the data from Cosmos DB. It’s a TON of data, so be patient while we chew up all your bandwidth.

See the Pen GYvGep by Burke Holland (@burkeholland) on CodePen.

If you made it this far, I...well, I’m impressed. D3 is not an easy thing to get into. It simply doesn’t look like it’s going to be any fun. BUT, no thumbs were smashed here, and we now have complete control of this chart. We can do anything we like with it.

Check out some of these additional resources for D3, and good luck with your chart. You can do it! Or you can’t. Either way, someone has to make a chart, and it might as well be you.

For your data and API: More on D3:

The post Hand roll charts with D3 like you actually know what you’re doing appeared first on CSS-Tricks.

How to stop using console.log() and start using your browser’s debugger

Css Tricks - Tue, 10/23/2018 - 10:56am

Whenever I see someone really effectively debug JavaScript in the browser, they use the DevTools tooling to do it. Setting breakpoints and hopping over them and such. That, as opposed to sprinkling console.log() (and friends) statements all around your code.

Parag Zaveri wrote about the transition and it has clearly resonated with lots of folks! (7.5k claps on Medium as I write).

I know I have hangups about it...

  • Part of debugging is not just inspecting code once as-is; it's inspecting stuff, making changes and then continuing to debug. If I spend a bunch of time setting up breakpoints, will they still be there after I've changed my code and refreshed? Answer: DevTools appears to do a pretty good job with that.
  • Looking at the console to see some output is one thing, but mucking about in the Sources panel is another. My code there might be transpiled, combined, and not quite look like my authored code, making things harder to find. Plus it's a bit cramped in there, visually.

But yet! It's so powerful. Setting a breakpoint (just by clicking a line number) means that I don't have to litter my own code with extra junk, nor do I have to choose what to log. Every variable in local and global scope is available for me to inspect at that breakpoint. I learned in Parag's article that you might not even need to manually set breakpoints. You can, for example, have it break whenever a click (or other) event fires. Plus, you can type in variable names you specifically want to watch for, so you don't have to dig around looking for them. I'll be trying to use the proper DevTools for debugging more often and seeing how it goes.

While we're talking about debugging though... I've had this in my head for a few months: Why doesn't JavaScript have log levels? Apparently, this is a very common concept in many other languages. You can write logging statements, but they will only log if the configuration says it should. That way, in development, you can get detailed logging, but log only more serious errors in production. I mention it because it could be nice to leave useful logging statements in the code, but not have them actually log if you set like console.level = "production"; or whatever. Or perhaps they could be compiled out during a build step.
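A sketch of what userland log levels could look like. Everything here is made up for illustration: createLogger and its level names are not a real console API, just one way the idea could be wired together.

```javascript
// Numeric weights so levels can be compared.
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

// Build a logger that drops anything below the configured level.
function createLogger(minLevel) {
  const threshold = LEVELS[minLevel];
  const log = level => (...args) => {
    if (LEVELS[level] >= threshold) console.log(`[${level}]`, ...args);
  };
  return {
    debug: log('debug'),
    info: log('info'),
    warn: log('warn'),
    error: log('error'),
  };
}

// In development you'd pass 'debug'; in production, 'error'.
const logger = createLogger('error');
logger.debug('verbose detail');  // silently dropped
logger.error('something broke'); // prints "[error] something broke"
```

The nice part is that the debug statements can stay in the code permanently; only the configuration changes between environments.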

Direct Link to ArticlePermalink

The post How to stop using console.log() and start using your browser’s debugger appeared first on CSS-Tricks.

Use Cases for Flexbox

Css Tricks - Tue, 10/23/2018 - 7:46am

I remember when I first started to work with flexbox that the world looked like flexible boxes to me. It's not that I forgot how floats, inline-block, or any other layout mechanisms work, I just found myself reaching for flexbox by default.

Now that grid is here and I'm working on projects where I can use it freely, I find myself reaching for grid by default for the most part. But it's not that I forgot how flexbox works or feel that grid supersedes flexbox — it's just that darn useful. Rachel puts it very well:

Asking whether your design should use Grid or Flexbox is a bit like asking if your design should use font-size or color. You should probably use both, as required. And, no-one is going to come to chase you if you use the wrong one.

Yes, they can both lay out some boxes, but they are different in nature and are designed for different use cases. Wrapping uneven-length elements is a big one, but Rachel goes into a bunch of different use cases in this article.

Direct Link to ArticlePermalink

The post Use Cases for Flexbox appeared first on CSS-Tricks.

Durable Functions: Fan Out Fan In Patterns

Css Tricks - Tue, 10/23/2018 - 4:09am

This post is a collaboration between myself and my awesome coworker, Maxime Rouiller.

Durable Functions? Wat. If you’re new to Durable, I suggest you start here with this post that covers all the essentials so that you can properly dive in. In this post, we’re going to dive into one particular use case so that you can see a Durable Function pattern at work!

Today, let’s talk about the Fan Out, Fan In pattern. We’ll do so by retrieving an open issue count from GitHub and then storing what we get. Here’s the repo where all the code lives that we’ll walk through in this post.

View Repo

About the Fan Out/Fan In Pattern

We briefly mentioned this pattern in the previous article, so let’s review. You’d likely reach for this pattern when you need to execute multiple functions in parallel and then perform some other task with those results. You can imagine that this pattern is useful for quite a lot of projects, because it’s pretty often that we have to do one thing based on data from a few other sources.

For example, let’s say you are a takeout restaurant with a ton of orders coming through. You might use this pattern to first get the order, then use that order to figure out prices for all the items, the availability of those items, and see if any of them have any sales or deals. Perhaps the sales/deals are not hosted in the same place as your prices because they are controlled by an outside sales firm. You might also need to find out what your delivery queue is like and who on your staff should get it based on their location.

That’s a lot of coordination! But you’d need to then aggregate all of that information to complete the order and process it. This is a simplified, contrived example of course, but you can see how useful it is to work on a few things concurrently so that they can then be used by one final function.

Here’s what that looks like, in abstract code and visualization

See the Pen Durable Functions: Pattern #2, Fan Out, Fan In by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')

module.exports = df(function*(ctx) {
  const tasks = []

  // items to process concurrently, added to an array
  const taskItems = yield ctx.df.callActivityAsync('fn1')
  taskItems.forEach(item => tasks.push(ctx.df.callActivityAsync('fn2', item)))

  yield ctx.df.Task.all(tasks)

  // send results to last function for processing
  yield ctx.df.callActivityAsync('fn3', tasks)
})
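Stripped of the async machinery, fan out/fan in is just "do one piece of work per item, then aggregate." Here is the shape in synchronous miniature; the restaurant data and pricing step are made up for illustration, and in real Durable Functions each line item would be its own activity function running concurrently.

```javascript
// A tiny takeout order.
const order = { items: ['taco', 'burrito', 'salsa'] };
const prices = { taco: 3, burrito: 7, salsa: 2 };

// Fan out: kick off one piece of work per item.
const lineItems = order.items.map(name => ({ name, price: prices[name] }));

// Fan in: aggregate all the results into one answer.
const total = lineItems.reduce((sum, item) => sum + item.price, 0);

console.log(total); // 12
```

The orchestrator pattern is this same map-then-reduce, except the "map" step runs in parallel across activity functions and Task.all is the synchronization point.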

Now that we see why we would want to use this pattern, let’s dive in to a simplified example that explains how.

Setting up your environment to work with Durable Functions

First things first. We've got to get our development environment ready to work with Durable Functions. Let's break that down.

GitHub Personal Access Token

To run this sample, you’ll need to create a personal access token in GitHub. Go under your account photo, open the dropdown, and select Settings, then Developer settings in the left sidebar. In the same sidebar on the next screen, click the Personal access tokens option.

Then a prompt will come up and you can click the Generate new token button. You should give your token a name that makes sense for this project. Like “Durable functions are better than burritos.” You know, something standard like that.

For the scopes/permission option, I suggest selecting "repos", which then allows you to click the Generate token button and copy the token to your clipboard. Please keep in mind that you should never commit your token. (It will be revoked if you do. Ask me why I know that.) If you need more info on creating tokens, there are further instructions here.

Functions CLI

First, we’ll install the latest version of the Azure Functions CLI. We can do so by running this in our terminal:

npm i -g azure-functions-core-tools@core --unsafe-perm true

Does the unsafe perm flag freak you out? It did for me as well. Really what it’s doing is preventing UID/GID switching when package scripts run, which is necessary because the package itself is a JavaScript wrapper around .NET. Brew installing without such a flag is also available and more information about that is here.

Optional: Setting up the project in VS Code

Totally not necessary, but I like working in VS Code with Azure Functions because it has great local debugging, which is typically a pain with Serverless functions. If you haven’t already installed it, you can do so here:

Set up a Free Trial for Azure and Create a Storage Account

To run this sample, you’ll need to test drive a free trial for Azure. You can go into the portal and sign in, then make a new Blob Storage account and retrieve the keys. Since we have that all squared away, we’re ready to rock!

Setting up Our Durable Function

Let’s take a look at the repo we have set up. We’ll clone or fork it:

git clone https://github.com/Azure-Samples/durablefunctions-apiscraping-nodejs.git

Here’s what that initial file structure is like.

(This visualization was made from my CLI tool.)

In local.settings.json, change GitHubToken to the value you grabbed from GitHub earlier, and do the same for the two storage keys — paste in the keys from the storage account you set up earlier.

Then run:

func extensions install
npm i
func host start

And now we’re running locally!

Understanding the Orchestrator

As you can see, we have a number of folders within the FanOutFanInCrawler directory. The functions in the directories listed GetAllRepositoriesForOrganization, GetAllOpenedIssues, and SaveRepositories are the functions that we will be coordinating.

Here’s what we’ll be doing:

  • The Orchestrator will kick off the GetAllRepositoriesForOrganization function, where we’ll pass in the organization name, retrieved from getInput() from the Orchestrator_HttpStart function
  • Since this is likely to be more than one repo, we’ll first create an empty array, then loop through all of the repos and run GetOpenedIssues, and push those onto the array. What we’re running here will all fire concurrently because it isn’t within the yield in the iterator
  • Then we’ll wait for all of the tasks to finish executing and finally call SaveRepositories which will store all of the results in Blob Storage

Since the other functions are fairly standard, let’s dig into that Orchestrator for a minute. If we look inside the Orchestrator directory, we can see it has a fairly traditional setup for a function with index.js and function.json files.

Generators

Before we dive into the Orchestrator, let’s take a very brief side tour into generators, because you won’t be able to understand the rest of the code without them.

A generator is not the only way to write this code! It could be accomplished with other asynchronous JavaScript patterns as well. It just so happens that this is a pretty clean and legible way to write it, so let’s look at it really fast.

function* generator(i) {
  yield i++;
  yield i++;
  yield i++;
}

var gen = generator(1);

console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3
console.log(gen.next()); // { value: undefined, done: true }

After the initial little asterisk following function*, you can begin to use the yield keyword. Calling a generator function does not execute the whole function in its entirety; an iterator object is returned instead. The next() method will walk over them one by one, and we’ll be given an object that tells us both the value and done — which will be a boolean of whether we’re done walking through all of the yield statements. You can see in the example above that for the last .next() call, an object is returned where done is true, letting us know we’ve iterated through all values.

Orchestrator code

We’ll start with the require statement we’ll need for this to work:

const df = require('durable-functions')

module.exports = df(function*(context) {
  // our orchestrator code will go here
})

It's worth noting that the asterisk there will create an iterator function.

First, we’ll get the organization name from the Orchestrator_HttpStart function and get all the repos for that organization with GetAllRepositoriesForOrganization. Note we use yield within the repositories assignment to make the function perform in sequential order.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )
})

Then we’re going to create an empty array named output, create a for loop from the array we got containing all of the organization's repos, and use that to push the issues into the array. Note that we don’t use yield here so that they’re all running concurrently instead of waiting one after another.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }
})

Finally, when all of these executions are done, we’re going to store the results and pass that in to the SaveRepositories function, which will save them to Blob Storage. Then we’ll return the unique ID of the instance (context.instanceId).

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }

  const results = yield context.df.Task.all(output)
  yield context.df.callActivityAsync('SaveRepositories', results)

  return context.instanceId
})

Now we’ve got all the steps we need to manage all of our functions with this single orchestrator!

Deploy

Now the fun part. Let’s deploy! 🚀

To deploy components, Azure requires you to install the Azure CLI and login with it.

First, you will need to provision the service. Look into the provision.ps1 file that's provided to familiarize yourself with the resources we are going to create. Then, you can execute the file with the previously generated GitHub token like this:

.\provision.ps1 -githubToken <TOKEN> -resourceGroup <ResourceGroupName> -storageName <StorageAccountName> -functionName <FunctionName>

If you don’t want to install PowerShell, you can also take the commands within provision.ps1 and run them manually.

And there we have it! Our Durable Function is up and running.

The post Durable Functions: Fan Out Fan In Patterns appeared first on CSS-Tricks.

Understanding the difference between grid-template and grid-auto

Css Tricks - Mon, 10/22/2018 - 11:16am

Ire Aderinokun:

Within a grid container, there are grid cells. Any cell positioned and sized using the grid-template-* properties forms part of the explicit grid. Any grid cell that is not positioned/sized using this property forms part of the implicit grid instead.

Understanding explicit grids and implicit grids is powerful. This is my quicky take:

  • Explicit: you define a grid and place items exactly where you want them to go.
  • Implicit: you define a grid and let items fall into it as they can.

Grids can be both!

Direct Link to ArticlePermalink

The post Understanding the difference between grid-template and grid-auto appeared first on CSS-Tricks.

Hard Costs of Third-Party Scripts

Css Tricks - Mon, 10/22/2018 - 11:15am

Dave Rupert:

Every client I have averages ~30 third-party scripts but discussions about reducing them with stakeholders end in “What if we load them all async?” This is a good rebuttal because there are right and wrong ways to load third-party scripts, but there is still a cost, a cost that’s passed on to the user. And that’s what I want to investigate.

Yes, performance is a major concern. But it's not just the loading time and final weight of those scripts, there are all sorts of concerns. Dave lists privacy, render blocking, fighting for CPU time, fighting for network connection threads, data and battery costs, and more.

Dave's partner Trent Walton is also deep into thinking about third-party scripts, which he talked about a bit on the latest ShopTalk Show.

Check out Paolo Mioni's investigation of a single script and the nefarious things it can do.

Direct Link to ArticlePermalink

The post Hard Costs of Third-Party Scripts appeared first on CSS-Tricks.

Building Skeleton Components with React

Css Tricks - Mon, 10/22/2018 - 4:09am

One of the advantages of building a Single Page Application (SPA) is the way navigating between pages is extremely fast. Unfortunately, the data of our components is sometimes only available after we have navigated to a specific part of our application. We can level up the user’s perceived performance by breaking the component into two pieces: the container (which displays a skeleton view when it’s empty) and the content. If we delay the rendering of the content component until we have actually received the content required, then we can leverage the skeleton view of the container thus boosting the perceived load time!

Let’s get started in creating our components.

What we’re making

We will be leveraging the skeleton component that was built in the article, “Building Skeleton Screens with CSS Custom Properties.”

This is a great article that outlines how you can create a skeleton component, and the use of the :empty selector allows us to cleverly use {this.props.children} inside of our components so that the skeleton card is rendered whenever the content is unavailable.

See the Pen React 16 -- Skeleton Card - Final by Mathias Rechtzigel (@MathiasaurusRex) on CodePen.

Creating our components

We’re going to create a couple of components to help get us started.

  1. The outside container (CardContainer)
  2. The inside content (CardContent)

First, let’s create our CardContainer. This container component will leveraging the :empty pseudo selector so it will render the skeleton view whenever this component doesn’t receive a child.

class CardContainer extends React.Component {
  render() {
    return (
      <div className="card">
        {this.props.children}
      </div>
    );
  }
}

Next, let’s create our CardContent component, which will be nested inside of our CardContainer component.

class CardContent extends React.Component {
  render() {
    return (
      <div className="card--content">
        <div className="card-content--top">
          <div className="card-avatar">
            <img className="card-avatar--image" src={this.props.avatarImage} alt="" />
            <span>{this.props.avatarName}</span>
          </div>
        </div>
        <div className="card-content--bottom">
          <div className="card-copy">
            <h1 className="card-copy--title">{this.props.cardTitle}</h1>
            <p className="card-copy--description">{this.props.cardDescription}</p>
          </div>
          <div className="card--info">
            <span className="card-icon">
              <span className="sr-only">Total views: </span>
              {this.props.countViews}
            </span>
            <span className="card-icon">
              <span className="sr-only">Total comments: </span>
              {this.props.countComments}
            </span>
          </div>
        </div>
      </div>
    );
  }
}

As you can see, there are a couple of places for properties that can be accepted, such as an avatar image and name, as well as the visible content of the card.

Putting the components together allows us to create a full card component.

<CardContainer>
  <CardContent
    avatarImage='path/to/avatar.jpg'
    avatarName='FirstName LastName'
    cardTitle='Title of card'
    cardDescription='Description of card'
    countComments='XX'
    countViews='XX'
  />
</CardContainer>

See the Pen React 16 -- Skeleton Card - Card Content No State by Mathias Rechtzigel (@MathiasaurusRex) on CodePen.

Using a ternary operator to reveal contents when the state has been loaded

Now that we have both a CardContainer and CardContent component, we have split our card into the necessary pieces to create a skeleton component. But how do we swap between the two when content has been loaded?

This is where a clever use of state and ternary operators comes to the rescue!

We’re going to do three things in this section:

  1. Create a state object that is initially set to false
  2. Update our component to use a ternary operator so that the cardContent component will not be rendered when the state is false
  3. Set the state to be the content of our object once we receive that information

We want to set the default state of our content to false. This hides the card content and allows the CSS :empty selector to do its magic.

this.state = { cardContent: false };

Now we’ve got to update our CardContainer children to include a ternary operator. In our case, it looks at this.state.cardContent to see whether it resolves to true or false. If it’s true, it does everything on the left side of the colon (:). Conversely, if it’s false, it does everything on the right side of the colon. This is pretty useful because objects resolve to true, and if we set the initial state to false, then our component has all the conditions it needs to implement a skeleton component!
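Stripped of JSX, the truthiness check driving the ternary boils down to something like this (a framework-free sketch with hypothetical names, not code from the demo):

```javascript
// Objects are truthy, so once cardContent holds data we take the first
// branch; while it is still `false`, we return null and the empty
// container's skeleton styles show instead.
function renderCard(cardContent) {
  return cardContent
    ? `CardContent(${cardContent.cardTitle})` // stand-in for the JSX branch
    : null;                                   // nothing rendered -> skeleton
}
```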

Let’s combine everything together inside of our main application. We won’t worry about the state inside CardContent quite yet. We’ll bind that to a button to mimic the process of fetching content from an API.

<CardContainer>
  {this.state.cardContent
    ? <CardContent
        avatarImage={this.state.cardContent.card.avatarImage}
        avatarName={this.state.cardContent.card.avatarName}
        cardTitle={this.state.cardContent.card.cardTitle}
        cardDescription={this.state.cardContent.card.cardDescription}
        countComments={this.state.cardContent.card.countComments}
        countViews={this.state.cardContent.card.countViews}
      />
    : null
  }
</CardContainer>

Boom! As you can see, the card is rendering as the skeleton component since the state of cardContent is set to false. Next, we’re going to create a function that sets the state of cardContent to a mock Card Data Object (dummyCardData):

populateCardContent = (event) => {
  const dummyCardData = {
    card: {
      avatarImage: "https://gravatar.com/avatar/f382340e55fa164f1e3aef2739919078?s=80&d=https://codepen.io/assets/avatars/user-avatar-80x80-bdcd44a3bfb9a5fd01eb8b86f9e033fa1a9897c3a15b33adfc2649a002dab1b6.png",
      avatarName: "Mathias Rechtzigel",
      cardTitle: "Minneapolis",
      cardDescription: "Winter is coming, and it will never leave",
      countComments: "52",
      countViews: "32"
    }
  };
  const cardContent = dummyCardData;
  this.setState({ cardContent });
}

In this example, we’re setting the state inside of a function. We could also leverage React’s lifecycle methods to populate the component’s state. We would have to take a look at the appropriate method to use, depending on our requirements. For example, if I’m loading an individual component and want to get the content from the API, then we would use the componentDidMount lifecycle method. As the documentation states, we have to be careful of using this lifecycle method in this way as it could cause an additional render — but setting the initial state to false should prevent that from happening.
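As a rough sketch of that flow (a minimal stand-in class, not real React; `fetchCard` is a hypothetical API helper passed in for illustration):

```javascript
// A framework-free stand-in showing where the fetch would live.
class CardFetcher {
  constructor() {
    // Initial state is false, so the skeleton renders on first paint.
    this.state = { cardContent: false };
  }
  setState(next) {
    this.state = { ...this.state, ...next };
  }
  // In a real component this body would live in componentDidMount.
  async componentDidMount(fetchCard) {
    const cardContent = await fetchCard();
    this.setState({ cardContent });
  }
}
```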

See the Pen React 16 -- Skeleton Card - Final by Mathias Rechtzigel (@MathiasaurusRex) on CodePen.

The second card in the list is hooked up to the click event that sets the cardContent state. Once the state is set to the content’s object, the skeleton version of the card disappears and the content is shown, ensuring that the user doesn’t see a flash of UI (FLU season is coming so we don’t want to give the users the F.L.U.!).

Let’s review

We covered quite a bit, so let’s recap what we did.

  1. We created a CardContainer. The container component is leveraging the :empty pseudo selector so that it renders the skeleton view of the component when it is empty.
  2. We created the CardContent component that is nested within CardContainer that we pass our state to.
  3. We set the default state of the cardContent to false.
  4. We use a ternary operator to render the inner content component only when we receive the content and put it in our cardContent state object.

And there we have it! A perceived boost in performance by creating an interstitial state between the UI being rendered and it receiving the data to populate content.

The post Building Skeleton Components with React appeared first on CSS-Tricks.

8 Tips for Great Code Reviews

Css Tricks - Fri, 10/19/2018 - 7:45am

Kelly Sutton with good advice on code reviews. Hard to pick a favorite. I like all the stuff about minding your tone and getting everyone involved, but I also think the computerization stuff is important:

If a computer can decide and enforce a rule, let the computer do it. Arguing spaces vs. tabs is not a productive use of human time.

Re: Tip #6: it's pretty cool when the tools you use can help with that, like this new GitHub feature where code suggestions can turn into a commit.

Direct Link to ArticlePermalink

The post 8 Tips for Great Code Reviews appeared first on CSS-Tricks.

Why Do You Use Frameworks?

Css Tricks - Fri, 10/19/2018 - 7:30am

Nicole Sullivan asked. People said:

  • 🐦 ... for the same reason that I buy ingredients rather than growing/raising all of my own food.
  • 🐦 I write too many bugs without them.
  • 🐦 Avoiding bikeshedding.
  • 🐦 ... to solve problems that are adjacent to, but distinct from, the problem I'm trying to solve at hand.
  • 🐦 Because to create the same functionality would require a much larger team
  • 🐦 I want to be able to focus on building the product rather than the tools.
  • 🐦 it’s easier to pick a framework and point to docs than teach and document your own solution.
  • 🐦 faster development
  • 🐦 They have typically solved the problems and in a better way than my first version or even fifth version will be.

There are tons more replies. Jeremy notes "exactly zero mention end users." I said: Sometimes I just wanna be told what to do.

Nicole stubbed out the responses:

Why do you use frameworks? Almost 100 of you answered. Here are the results. pic.twitter.com/jdcTpA0kf5

— Nicole Sullivan 💎 (@stubbornella) October 16, 2018

If you can't get enough of the answers here, Rachel asked the same thing a few days later, this time scoped to CSS frameworks.

The post Why Do You Use Frameworks? appeared first on CSS-Tricks.

Using Feature Detection, Conditionals, and Groups with Selectors

Css Tricks - Fri, 10/19/2018 - 4:18am

CSS is designed in a way that allows for relatively seamless addition of new features. Since the dawn of the language, specifications have required browsers to gracefully ignore any properties, values, selectors, or at-rules they do not support. Consequently, in most cases, it is possible to successfully use a newer technology without causing any issues in older browsers.

Consider the relatively new caret-color property (it changes the color of the cursor in inputs). Its support is still low but that does not mean that we should not use it today.

.myInput {
  color: blue;
  caret-color: red;
}

Notice how we put it right next to color, a property with practically universal browser support; one that will be applied everywhere. In this case, we have not explicitly discriminated between modern and older browsers. Instead, we just rely on the older ones ignoring features they do not support.

It turns out that this pattern is powerful enough in the vast majority of situations.

When feature detection is necessary

In some cases, however, we would really like to use a modern property or property value whose use differs significantly from its fallback. In those cases, @supports comes to the rescue.

@supports is a special at-rule that allows us to conditionally apply any styles in browsers that support a particular property and its value.

@supports (display: grid) {
  /* Styles for browsers that support grid layout... */
}

It works analogously to @media queries, which also only apply styles conditionally when a certain predicate is met.
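Like media queries, @supports predicates can also be combined with and, or, and not. For instance (the class name here is purely illustrative):

```css
/* Target browsers that support flexbox but not grid layout */
@supports (display: flex) and (not (display: grid)) {
  .layout { display: flex; }
}
```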

To illustrate the use of @supports, consider the following example: we would like to display a user-uploaded avatar in a nice circle but we cannot guarantee that the actual file will be of square dimensions. For that, the object-fit property would be immensely helpful; however, it is not supported by Internet Explorer (IE). What do we do then?

Let us start with markup:

<div class="avatar">
  <img class="avatar-image" src="..." alt="..." />
</div>

As a not-so-pretty fallback, we will squeeze the image width within the avatar at the cost that wider files will not completely cover the avatar area. Instead, our single-color background will appear underneath.

.avatar {
  position: relative;
  width: 5em;
  height: 5em;
  border-radius: 50%;
  overflow: hidden;
  background: #cccccc; /* Fallback color */
}

.avatar-image {
  position: absolute;
  top: 50%;
  right: 0;
  bottom: 0;
  left: 50%;
  transform: translate(-50%, -50%);
  max-width: 100%;
}

You can see this behavior in action here:

See the Pen Demo fallback for object-fit by Jirka Vebr (@JirkaVebr) on CodePen.

Notice there is one square image, a wide one, and a tall one.

Now, if we use object-fit, we can let the browser decide the best way to position the image, namely whether to stretch the width, height, or neither.

@supports (object-fit: cover) {
  .avatar-image {
    /* We no longer need absolute positioning or any transforms */
    position: static;
    transform: none;
    object-fit: cover;
    width: 100%;
    height: 100%;
  }
}

The result, for the same set of image dimensions, works nicely in modern browsers:

See the Pen @supports object-fit demo by Jirka Vebr (@JirkaVebr) on CodePen.

Conditional selector support

Even though the Selectors Level 4 specification is still a Working Draft, some of the selectors it defines — such as :placeholder-shown — are already supported by many browsers. Should this trend continue (and should the draft retain most of its current proposals), this level of the specification will introduce more new selectors than any of its predecessors. In the meantime, and also while IE is still alive, CSS developers will have to target a yet more diverse and volatile spectrum of browsers with nascent support for these selectors.

It will be very useful to perform feature detection on selectors. Unfortunately, @supports is only designed for testing support of properties and their values, and even the newest draft of its specification does not appear to change that. Ever since its inception, it has, however, defined a special production rule in its grammar whose sole purpose is to provide room for potential backwards-compatible extensions, and thus it is perfectly feasible for a future version to add the ability to condition on support for particular selectors. Nevertheless, that eventuality remains entirely hypothetical.

Selector counterpart to @supports

First of all, it is important to emphasize that, analogous to the aforementioned caret-color example where @supports is probably not necessary, many selectors do not need to be explicitly tested for either. For instance, we might simply try to match ::selection and not worry about browsers that do not support it since it will not be the end of the world if the selection appearance remains the browser default.

Nevertheless, there are cases where explicit feature detection for selectors would be highly desirable. In the rest of this article, we will introduce a pattern for addressing such needs and subsequently use it with :placeholder-shown to build a CSS-only alternative to the Material Design text field with a floating label.

Fundamental property groups of selectors

In order to avoid duplication, it is possible to condense several identical declarations into one comma-separated list of selectors, which is referred to as group of selectors.

Thus we can turn:

.foo { color: red }
.bar { color: red }

...into:

.foo, .bar { color: red }

However, as the Selectors Level 3 specification warns, these are only equivalent because all of the selectors involved are valid. As per the specification, if any of the selectors in the group is invalid, the entire group is ignored. Consequently, the selectors:

..foo { color: red } /* Note the extra dot */
.bar { color: red }

...could not be safely grouped, as the former selector is invalid. If we grouped them, we would cause the browser to ignore the declaration for the latter as well.

It is worth pointing out that, as far as a browser is concerned, there is no difference between an invalid selector and a selector that is only valid as per a newer version of the specification, or one that the browser does not know. To the browser, both are simply invalid.

We can take advantage of this property to test for support of a particular selector. All we need is a selector that we can guarantee matches nothing. In our examples, we will use :not(*).

.foo { color: red }
:not(*):placeholder-shown, .foo { color: green }

Let us break down what is happening here. An older browser will successfully apply the first rule, but when processing the rest, it will find the first selector in the group invalid since it does not know :placeholder-shown, and thus it will ignore the entire selector group. Consequently, all elements matching .foo will remain red. In contrast, while a newer browser will likely roll its robot eyes upon encountering :not(*) (which never matches anything), it will not discard the entire selector group. Instead, it will override the previous rule, and thus all elements matching .foo will be green.

Notice the similarity to @supports (or any @media query, for that matter) in terms of how it is used. We first specify the fallback and then override it for browsers that satisfy a predicate, which in this case is the support for a particular selector — albeit written in a somewhat convoluted fashion.

See the Pen @supports for selectors by Jirka Vebr (@JirkaVebr) on CodePen.

Real-world example

We can use this technique for our input with a floating label to separate browsers that do from those that do not support :placeholder-shown, a pseudo-class that is absolutely vital to this example. For the sake of relative simplicity, in spite of best UI practices, we will choose our fallback to be only the actual placeholder.

Let us start with markup:

<div class="input">
  <input class="input-control" type="email" name="email" placeholder="Email" id="email" required />
  <label class="input-label" for="email">Email</label>
</div>

As before, the key is to first add styles for older browsers. We hide the label and set the color of the placeholder.

.input {
  height: 3.2em;
  position: relative;
  display: flex;
  align-items: center;
  font-size: 1em;
}

.input-control {
  flex: 1;
  z-index: 2; /* So that it is always "above" the label */
  border: none;
  padding: 0 0 0 1em;
  background: transparent;
  position: relative;
}

.input-label {
  position: absolute;
  top: 50%;
  right: 0;
  bottom: 0;
  left: 1em; /* Align this with the control's padding */
  z-index: 1;
  display: none; /* Hide this for old browsers */
  transform-origin: top left;
  text-align: left;
}

For modern browsers, we can effectively disable the placeholder by setting its color to transparent. We can also align the input and the label relative to one other for when the placeholder is shown. To that end, we can also utilize the sibling selector in order to style the label with respect to the state of the input.

.input-control:placeholder-shown::placeholder {
  color: transparent;
}

.input-control:placeholder-shown ~ .input-label {
  transform: translateY(-50%);
}

.input-control:placeholder-shown {
  transform: translateY(0);
}

Finally, the trick! Exactly like above, we override the styles for the label and the input for modern browsers and the state where the placeholder is not shown. That involves moving the label out of the way and shrinking it a little.

:not(*):placeholder-shown,
.input-label {
  display: block;
  transform: translateY(-70%) scale(.7);
}

:not(*):placeholder-shown,
.input-control {
  transform: translateY(35%);
}

With all the pieces together, as well as more styles and configuration options that are orthogonal to this example, you can see the full demo:

See the Pen CSS-only @supports for selectors demo by Jirka Vebr (@JirkaVebr) on CodePen.

Reliability and limitations of this technique

Fundamentally, this technique requires a selector that matches nothing. To that end, we have been using :not(*); however, its support is also limited. The universal selector * is supported even by IE 7, whereas the :not pseudo-class has only been implemented since IE 9, which is thus the oldest browser in which this approach works. Older browsers would reject our selector groups for the wrong reason — they do not support :not! Alternatively, we could use a class selector such as .foo or a type selector such as foo, thereby supporting even the most ancient browsers. Nevertheless, these make the code less readable as they do not convey that they should never match anything, and thus for most modern sites, :not(*) seems like the best option.
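For instance, the class-selector variant for ancient browsers would look like this (.never-matches is a hypothetical class name we guarantee is unused in the markup):

```css
/* Works back to IE 7 and beyond: the first selector can never match,
   so it serves purely as a support test for :placeholder-shown. */
.never-matches:placeholder-shown,
.input-label {
  display: block;
  transform: translateY(-70%) scale(.7);
}
```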

As for whether the property of groups of selectors that we have been taking advantage of also holds in older browsers, the behavior is illustrated in an example as a part of the CSS 1 section on forward-compatible parsing. Furthermore, the CSS 2.1 specification then explicitly mandates this behavior. To put the age of this specification in perspective, this is the one that introduced :hover. In short, while this technique has not been extensively tested in the oldest or most obscure browsers, its support should be extremely wide.

Lastly, there is one small caveat for Sass users (Sass, not SCSS): upon encountering the :not(*):placeholder-shown selector, the compiler gets fooled by the leading colon, attempts to parse it as a property, and when encountering the error, it advises the developer to escape the selector like so: \:not(*):placeholder-shown, which does not look very pleasant. A better workaround is perhaps to replace the backslash with yet another universal selector to obtain *:not(*):placeholder-shown since, as per the specification, it is implied anyway in this case.

The post Using Feature Detection, Conditionals, and Groups with Selectors appeared first on CSS-Tricks.

©2003 - Present Akamai Design & Development.