Front End Web Development

6 Months of Working Remotely Taught Me a Thing or Ten

Css Tricks - Fri, 09/08/2017 - 1:08pm

Peter Anglea writes up his key takeaways after six months on the job in a new remote front-end position. His points ring true to me as a remote worker, and the funny thing is that each one of the suggestions is actually applicable to anyone in almost any front-end job, whether it happens to be in-house or remote.

The full post is worth reading, though the list breaks down to:

  1. Be as available as possible
  2. Communicate clearly
  3. Go out of your way to be human
  4. Offer praise and positive sentiments early and often
  5. Create a comfortable space conducive to productivity
  6. Put your pants on
  7. Go outside
  8. Turn on your camera
  9. Work on more than one project at a time
  10. Take advantage of the perks… and be responsible

One item I would add to the list is to manage up in day-to-day conversations. In other words, give frequent and regular updates with examples of progress so that your client/boss/whomever has no doubt that you are being productive from afar. I suppose that goes along with "communicate clearly" but takes it one step further.


The average web page is 3MB. How much should we care?

Css Tricks - Fri, 09/08/2017 - 1:07pm

Tammy Everts with a deep dive into the average page size, which seems to grow year in and year out.

It's a little perplexing that the average page size trends up each year as performance has become a growing concern at the forefront of our minds, but Tammy has keen insights that are worth reading because she suggests that user experience isn't always about page size and that bloat is far from the only metric we should be concerned with.

Correlating page size with user experience is like presenting someone with an entire buffet dinner and assuming that it represents what they actually ate. To properly measure user experience, we need to focus on the content – such as the navbar or hero product image – that users actually want to consume. The best performance metric for measuring user experience is one that measures how long the user waits before seeing this critical content.

Spot on. There is such a thing as making intentional use of file size, depending on whether our goal is super fast load times or communicating an idea. Not that the two are mutually exclusive, but the trade-off can certainly exist.


Screen Readers and CSS: Are We Going Out of Style (and into Content)?

Css Tricks - Thu, 09/07/2017 - 2:06pm

The big takeaway in this post is that screen readers do not always read content the way it is styled in CSS. Toss in the fact that not all screen readers speak or read markup the same way, and that there are differences in how they render content in different browsers, and the results become... well, different. Kind of like cross-browser testing CSS, but with speech.

The key points:

  • Different screen reader/browser pairings behave differently
  • DOM order is everything
  • Containers are only visual

That first point is crucial. For example, beware of using <sup> to style prices in place of a proper decimal point between dollars and cents, because some screen readers will read that as a whole number. Wait, you mean the price is $12.99 and not $1,299? Phew. 😅
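
For illustration, here is a minimal sketch of the kind of markup in question (the product and price are made up):

<!-- Visually this reads as $12.99, but without a real decimal point
     some screen readers announce it as the whole number 1299 -->
<p class="price">$12<sup>99</sup></p>

<!-- Safer: keep an actual decimal point in the markup and style it -->
<p class="price">$12.99</p>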


Upgrade Your JavaScript Error Monitoring

Css Tricks - Thu, 09/07/2017 - 3:39am

(This is a sponsored post.)

Automatically detect and diagnose JavaScript errors impacting your users with Bugsnag. Get comprehensive diagnostic reports, know immediately which errors are worth fixing, and debug in a fraction of the time compared to traditional tools.

Bugsnag detects every single error and prioritizes errors with the greatest impact on your users. Get support for 50+ platforms and integrate with the development and productivity tools your team already uses.

Bugsnag is used by the world's top engineering teams including Airbnb, Pandora, MailChimp, Square, Shopify, Yelp, Lyft, Docker, and Cisco. Start your free trial today.


So You Want To Be a Senior Developer?

Css Tricks - Thu, 09/07/2017 - 3:39am

Let me start with a classic caveat: I cannot bestow upon you the title of senior developer. I have no special insight into how companies these days are hiring and promoting people to senior developer roles.

What I can tell you is what qualities I think would make for a heck of a senior developer. I can tell you how I think about the distinction between senior developers and those who aren't quite there yet. Should I, one day, be in charge of a legion of developers where it was my call what level they were at, this is what I would think about.

A senior front end developer has experience.

There is no way around this one. You aren't going to roll into your first job a senior developer.

You probably won't roll into any new job a senior developer. Even if I was pretty sure a person was going to be a senior developer and had previously been one, I probably wouldn't start them there, just because there is no guarantee they can be just as effective in a completely new environment. Even if the tech is the same, the people aren't.

A senior front-end developer has a track record of good judgment.

Development isn't only about writing code, it's about making choices. Good choices are researched, discussed, and influenced by instinct and experience. When you make a choice, you are demonstrating your judgment to everyone watching. If you make it clear that your judgment is informed, ethical, and as transparent as it can be, that's a good thing. If you do that over and over, that makes you senior.

A senior developer has positive impact beyond the code.

If the only thing you contribute to a team is coding chops, you probably aren't a particularly good candidate for a senior developer. Code isn't written in a bubble. Good code, anyway. Good code is a reflection of a team, a product of a cohesive vision, and a foundation for an organization's goals. To participate in good code, and demonstrate your ability to be a senior developer, you don't isolate yourself; you sit at the middle of the table (metaphorically).

Soft skills are no joke. A senior developer can write clear emails, rope people in around ideas, lead meetings, and just clean the damn soup explosion in the microwave before it turns into a productivity-draining war of passive-aggressive post-it notes (metaphorically).

A senior developer is helpful, not all-knowing.

Say a co-worker comes up to you and asks you something, and you have no idea what the answer is. Does that mean you aren't ready to be a senior developer? Absolutely not. It's all about how you answer that question that makes you senior or not. Can you help suss out why they are asking to get more context and be more broadly helpful? Can you offer to help find the answer together? Will you do some research afterward and follow up with them?

Even if you do know the exact answer, just delivering it and spinning your chair back around to continue typing is a worse answer than digging into the question together with your co-worker.

Being a senior developer doesn't mean you have to know everything, it means you can help find out anything.

A senior front-end developer is a force multiplier.

This is my favorite one by far.

There are developers who are, on paper at least, multiple times as effective as others. Twice the commits, twice the lines of code written, twice the bugs closed. You can absolutely aspire to be a developer like that, but it doesn't automatically give you the senior card.

The best (and most senior) developer on a team is the one who multiplies the effectiveness of their fellow developers. Perhaps that amazing developer on your team is able to be that way because someone else is freeing up their day to make that possible. Because someone else has created a rock-solid dev environment that fosters productivity. Because someone else taught them and gave them the keys to be that way.

A developer who is a force multiplier for the entire team is absolutely a senior developer.

I can't promise that doing these things will make you a senior developer.

I have no power to tell the chain of command at your office to think about these things and factor them into their decision making. I can say that this would be my advice to them, should they be seeking it, on how to promote developers.

Are you in the position to promote developers? Have you? Share your thinking with us!


For the love of God, please tell me what your company does

Css Tricks - Wed, 09/06/2017 - 1:11pm

Kasper Kubica goes on a humorous rant about the way companies describe themselves on their websites:

More and more often, upon discovering a new company or product, I visit their website hoping to find out what it is they do, but instead get fed a mash of buzzwords about their “team” and “values”. And this isn’t a side dish — this is the main entrée of these sites, with a coherent explanation of the company’s products or services rarely occupying more than a footnote on the menu.

While many of the examples and points are funny at their core, there's clearly a level of frustration laced between the lines and it's easy to understand why:

At this point, I’ve given up. I’m back to Google, back to searching ... because even though I came to [the site] knowing exactly what I wanted, I have no idea what they offer.

While this isn't so much about front-end development, it is a good reminder about content's role in usability and user experience. We can have the cleanest, most performant, and most accessible code ever committed, but the site still has to communicate something, and do it well, to be useful to the end user.


Now in Early Access: Visual search on Typekit

Nice Web Type - Wed, 09/06/2017 - 5:00am

Today we’re rolling out a whole new way to search for fonts visually on Typekit. The first step is one many of you have already mastered: Look for neat type in the world around you.

Our new visual search feature allows you to upload a picture of type—photos of signage or posters, flat artwork, any image file that contains a line of text—and see a list of all the fonts in our inventory that are visually similar to it. We’re launching this as an Early Access feature for now, and we’d love for you to try it out and let us know what you think.

Once you flip on Early Access in your account settings, you’ll see a camera icon in the search field, which toggles open a file selection prompt. You can also drag and drop image files from your desktop onto any Typekit page to start a visual search.

You’ll also see a new Discover section on Typekit.com, which is a quick stop for all kinds of typographic inspiration: visual search, foundries, curated lists, and more.

Getting started with a visual search on Typekit

So let’s say you’ve found some nifty type or lettering on a sign out in the world, and now you’d like to find fonts that are similar to it.

Photo credit: Maria Freyenbacher on Unsplash.com.

Snap a photo of the sign. To start a visual search, turn on Early Access, then use the camera icon in the search bar to select a photo from your hard drive, or (if you’re on your desktop) drag the photo into your browser window.

First, we’ll ask you to select the region of the photo you want to scan for type. We’ll try to select one for you automatically, but you can also move or resize the box to tell us precisely which text to search on.

Next, we’ll try to recognize the text in the sample you uploaded. If we got it right, you can move on to the next step, or else you can update the text to correct it.

Then, finally, we’ll show you a list of similar fonts from Typekit’s inventory.

We don’t hang on to the photos you upload, so be sure to add any fonts you like to your Favorites or take a screenshot of the results page. Some styles will definitely have more relevant results than others, but we’re improving the engine all the time.

So what can’t it do?

At the moment, visual search has some limitations. Toggle on the Tips while working with photos to see a brief guide. Generally, you’ll get best results if you use these guidelines.

  • Type samples that are clear, crisp, and straight on flat backgrounds will yield better, more consistent results. Pictures of text that are skewed, blurry, or low in contrast will sometimes work, but also might confuse the engine and return poorer-quality results.
  • Visual search works with a single line of text. If your image contains multiple lines, use the crop tool to select just one line. We recommend picking a line that has the most distinctive characters, such as Q, R, or lowercase ‘a’.
  • Some letter shapes/styles work better than others. Sans-serif and serif type tend to return better, more consistent results than connected scripts or blackletter, and mixed-case text tends to work better than all-caps or small-caps.
  • Right now we’re only able to recognize and match Latin characters. We are hoping to add support for other scripts in the future.

Above all, this feature should be a lot of fun to try out, and we hope it connects even more people with the type they see around them every day.

Let us know what you think! Spend two minutes on our brief survey, and if you have a whole lot to say feel free to drop an email to support@typekit.com — we’ll be grateful for your feedback.


Working with Schemas in WordPress

Css Tricks - Wed, 09/06/2017 - 3:34am

I polled a group of WordPress developers about schemas the other day and was surprised by the results. Even though almost all of them had heard of schemas and were aware of the potential benefits they provide, very few of them were actually using them on a project.

If you're unfamiliar with schemas, they are HTML attributes that help search engines understand the content structure and know how to display it correctly in search engine results. We've all worked on projects where SEO was a big ol' concern, so schemas can be a key deliverable to help optimize and deliver search performance.

We're going to dig into the concept of schemas a little more in this post and then walk through a real-life application of how to use them in a WordPress environment.

A Schema Overview

Schemas are a vocabulary of HTML attributes and values that describe the content of the document. The concept of this vocabulary was born out of a collaboration between members of Google, Microsoft, Yahoo and Yandex and has since become a project that is maintained by those founding organizations, in addition to members from the W3C and individuals in the community. In fact, you can view the Schema community's activity and connect with the group on their open community page.

You may see the term structured data tossed around when schemas are being discussed and that's because it's a good description for how schemas work. They provide a lexicon and hierarchy in the form of data that add structure and detail to HTML markup. That, in turn, makes the content of an HTML document much easier for search engines to crawl, read, index and interpret. If you see structured data somewhere, then we're really talking about schemas as well.

The Schema Format

Schema can be served in three different formats: Microdata, JSON-LD and RDFa. RDFa is one we aren't going to delve into in this post because Microdata and JSON-LD make up the vast majority of use cases. In fact, as we dive into a working example later in this post, we're going to shift our entire focus to JSON-LD.

Let's illustrate the difference between Microdata and JSON-LD with an example of a business listing website, where visitors can browse information about local businesses. Each business is going to be an item that has additional context, such as a business type, a business name, a description, and hours of operation. We want our search engines to read that data for the sake of being able to render that information cleanly when returning search results. You know, something like this:

Walgreens Pharmacy uses schema to display an address, contact information, operating hours, and even additional site links.

Here's how we would use Microdata to display business hours in a similar way:

<div itemscope itemtype="http://schema.org/Pharmacy">
  <h1 itemprop="name">Philippa's Pharmacy</h1>
  <p itemprop="description">
    A superb collection of fine pharmaceuticals.
  </p>
  <p>Open:
    <span itemprop="openingHours" content="Mo,Tu,We,Th 09:00-12:00">
      Monday-Thursday 9am-noon
    </span>
  </p>
</div>

The same can be achieved via JSON-LD:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Pharmacy",
  "name": "Philippa's Pharmacy",
  "description": "A superb collection of fine pharmaceuticals.",
  "openingHours": "Mo,Tu,We,Th 09:00-12:00"
}
</script>

How Schema Impacts SEO

The reason we're talking about schema at all is because we care about how our content is interpreted by search engines, so it's fair to wonder just how much impact schema has on a site's actual search engine ranking and performance.

Google's John Mueller participated in a video chat back in 2015 and gave a very clear indication of how important schemas are becoming in the field of search engine optimization. The fact that the schema project was founded and is maintained by giants in the search engine industry gives us a good idea that, if we want to rank and index well, then we should consider schema as part of our SEO strategy.

While there may be other sites and posts out there that have better data to back up the importance of schema, the thing we ought to point to is the impact it has on user experience. If someone were to look up "Tom Petty Concert Tickets" in Google and get a list of results back, it's easy to assume that the result with upcoming dates nicely outlined in the results would be the one that stands out the most and is most identifiably useful, even if it is not the first result in the bunch.

Oh nice, one of those results has schema that displays concert dates near me!

Again, this is conjecture and other posts or sites may have data to support the impact that schema has on search result rankings, but having a little bit of influence on the way search engines read and display our content on their pages is a nice affordance for us as front-end developers and we'll take what we can get.

Deciding Which Format to Use

It really comes down to your flavor preference at the end of the day. That said, Google's schema documentation is nearly all centered around JSON-LD so, if you're looking for more potential impact in Google's results, that might be your starting point. Google even has a handy Webmasters tool that generates data in JSON-LD making it perhaps the lowest barrier to entry if you're getting started.

Knowing What Data Can Be Structured

Google's guide to structured data is the most exhaustive and comprehensive resource on the topic and gives the best indication of what data can be structured with examples of how to do it.

The bottom line is that schema wants to categorize content into "types" and these are the types that Google currently recognizes as of this writing:

  • Articles
  • Books
  • Courses
  • Datasets
  • Events
  • Fact Check
  • Job Postings
  • Local Businesses
  • Music
  • Podcasts
  • Products
  • Recipes
  • Reviews
  • TV & Movies
  • Videos

In addition to content type, Google will also look for structured data that serve as UI enhancements to the search results:

  • Breadcrumbs
  • Sitelinks Searchbox
  • Corporate Contact Information
  • Logos
  • Social Profile Links
  • Carousels

You can really start to see the opportunities we have to help influence search results as far as what is displayed and how it is displayed.

Managing Schema in WordPress

Alright, we've spent a good amount of time diving into the concept of schemas and how they can benefit a site's search engine optimization, but how the heck do we work with it? I find the best way to tackle this is with a real-life example, so that's what we're going to do.

In this example, we're using WordPress as our content management system and will put the popular Advanced Custom Fields (ACF) plugin to use. In the end, we will have a way to generate schema for our content on the fly using valid JSON-LD format.

Some readers may be tempted to stop me here and ask why we aren't using the built-in schema management tools of popular WordPress SEO plugins, like Yoast and Schema. There are actually a ton of WordPress plugins that help add structured data to a site and going with any of them is a legitimate option that you ought to consider. In my experience, these plugins do not provide the level of detail I am looking for in projects that require access and control over every content type I need, such as opening hours and contact information for a local business.

That's where ACF comes to my rescue! Not only can we create the exact fields we need to capture the data we want to generate and serve, but we can do it dynamically as part of our everyday content management in WordPress.

Let's use a local business (spoiler alert on the Content Type, am I right?!) website as an example. We're going to create a custom page in WordPress that contains custom fields that allow us to manage the structured data for the business.

Here's what that will look like:

I've put all the working examples in this post together in a GitHub repo that you can use as a starting point or simply to follow along as we break down the steps to make it happen.

Download on GitHub

Step 1: Create the Custom Options Page

Setting up a custom admin page in WordPress can be done directly in our functions.php file:

// Create a General Options admin page
// `options_page` is going to be the name of ACF group we use to set up the fields
// We can use that as a conditional statement to create the page against
if (function_exists('acf_add_options_page')) {
  acf_add_options_page(array(
    'page_title' => 'General Options',
    'menu_title' => 'General Options',
    'menu_slug'  => 'general-options',
    'capability' => 'edit_posts',
    'redirect'   => false
  ));
}

That snippet gives us a new link in the WordPress navigation called General Options, but only after hooking things up in ACF in the next step. Of course, you can call this whatever you'd like. The point is that we now have a method for creating a page and a way to access it.

Step 2: Create the Custom Fields

Well, our General Options page is useless if there's nothing in it. With Advanced Custom Fields installed and activated, we now need to head over there and set up the fields needed to capture and store our structured data.

Here is how our custom fields will be organized:

  • Company Logo
  • Company Address
  • Hours of Operation
  • Closed Days
  • Contact Information
  • Social Media Links
  • Schema Type

There are a lot of fields here, and you can use the acf-export.json file from the GitHub repo to import the fields into ACF rather than manually creating them all yourself. Note that some of the fields use repeater functionality, which is currently only supported with a paid ACF extension.

Step 3: Linking Custom Fields to General Options

Now that we have the custom fields set up in ACF, our next task is to map them to our custom General Options page. Really, this step happens as the custom fields are being created. ACF provides settings for each field group that allow you to specify whether the fields should be displayed on specific pages.

In other words, for each field group we've created, be sure to go back in and confirm that the General Options page is selected so that the fields only display there in WordPress:

Now our General Options page has an actual set of options we can manage!

Please note: The way the data is organized in the example files is how I've grown accustomed to managing schema. You may find it easier to organize the fields in other ways, and that's totally cool. And, of course, if you are working with a different content type than this local business example, then you may not need all of the fields we are working with here or be required to use others.

Step 4: Enter Data

Alright, without data, our structured data would just be ... um, structured? Whatever that would be, let's enter the data.

  • Company Logo: Google specifies the ideal size to be 151px square. Google will use this image if it displays company information to the right of the search results. You can see this in action by searching a well-known company, like Google itself.
  • Building Photo: This can add some interest to the same company profile card where the Company Logo is displayed, but this field also impacts search results within maps. Google recommends a square 200px image.
  • Schema Type: Select the content type for the schema. In this example, we are dealing with a local business, so that is the content type.
  • Address: These are pretty straight-forward text fields and will be used both in search results and the same profile card as the Company Logo.
  • Openings: The specification for opening hours can be found on the schema.org website. The way we've set this up in the example is by using a repeater field that contains four sub-fields to specify the days of the week, the starting open time, the ending open time, and a toggle to distinguish between open and closed time ranges. This should cover all our bases, according to the schema documentation.
  • Special Days: These are holidays (e.g. Christmas) where the business might not be open during its regular operating hours. It's nice that schema provides this flexibility because it allows users to see those exceptions if they happen to be searching on those days.
  • Contact: There are a lot of settings available for contact data. We are putting three of them to use here with this example, namely Type (which is used like a business Department, say, Sales or Customer Service), Phone (which is the number to call), and Option (which supports options for TollFree and HearingImpairedSupported).
Step 5: Generate the JSON-LD

This is where the rubber meets the road. So far, we have created a place to manage our data, made the fields for that data, and actually entered the data. Now we need to take that collected data and spit it out into a format that search engines can put to use. Again, the GitHub repo has the finished result of what we're dealing with, but let's dig into that code to see how that data is fetched from ACF and converted to JSON-LD.

To read all the values and create the JSON-LD tag, we need to go into the functions.php file and write a snippet that injects our JSON data to the site header. We're going to inject the content type, address, and some data about the site that already exists in WordPress, such as the site name and address:

// Using `wp_head` to inject to the document <head>
add_action('wp_head', function() {
  $schema = array(
    // Tell search engines that this is structured data
    '@context' => "http://schema.org",
    // Tell search engines the content type it is looking at
    '@type' => get_field('schema_type', 'options'),
    // Provide search engines with the site name and address
    'name' => get_bloginfo('name'),
    'url' => get_home_url(),
    // Provide the company address
    'telephone' => '+49' . get_field('company_phone', 'options'), // needs country code
    'address' => array(
      '@type' => 'PostalAddress',
      'streetAddress' => get_field('address_street', 'option'),
      'postalCode' => get_field('address_postal', 'option'),
      'addressLocality' => get_field('address_locality', 'option'),
      'addressRegion' => get_field('address_region', 'option'),
      'addressCountry' => get_field('address_country', 'option')
    )
  );

  // The snippets in the following steps all go here,
  // inside this callback, before it is closed.
});

The logo is not really a required bit of information, so we're going to check whether it exists, then fetch it if it does and add it to the mix:

// If there is a company logo...
if (get_field('company_logo', 'option')) {
  // ...then add it to the schema array
  $schema['logo'] = get_field('company_logo', 'option');
}

Working with repeater fields in ACF requires a little extra consideration, so we're going to have to write a loop to fetch and add the social media links:

// Check for social media links
if (have_rows('social_media', 'option')) {
  $schema['sameAs'] = array();
  // For each instance...
  while (have_rows('social_media', 'option')) : the_row();
    // ...add it to the schema array
    array_push($schema['sameAs'], get_sub_field('url'));
  endwhile;
}

Adding the data from the Opening Hours fields is a little tricky, but only because we added that additional differentiation between open and closed time ranges. Basically, we need to check for the $closed variable we set up as part of the field, then output the times so they fall in the right group.

// Let's check for Opening Hours rows
if (have_rows('opening_hours', 'option')) {
  // Then set up the array
  $schema['openingHoursSpecification'] = array();
  // For each row...
  while (have_rows('opening_hours', 'option')) : the_row();
    // ...check if it's marked "Closed"...
    $closed = get_sub_field('closed');
    // ...then output the times
    $openings = array(
      '@type' => 'OpeningHoursSpecification',
      'dayOfWeek' => get_sub_field('days'),
      'opens' => $closed ? '00:00' : get_sub_field('from'),
      'closes' => $closed ? '00:00' : get_sub_field('to')
    );
    // Finally, push this array to the schema array
    array_push($schema['openingHoursSpecification'], $openings);
  endwhile;
}

We can use almost the same snippet to output our Special Days data:

// Let's check for Special Days rows
if (have_rows('special_days', 'option')) {
  // For each row...
  while (have_rows('special_days', 'option')) : the_row();
    // ...check if it's marked "Closed"...
    $closed = get_sub_field('closed');
    // ...then output the times
    $special_days = array(
      '@type' => 'OpeningHoursSpecification',
      'validFrom' => get_sub_field('date_from'),
      'validThrough' => get_sub_field('date_to'),
      'opens' => $closed ? '00:00' : get_sub_field('time_from'),
      'closes' => $closed ? '00:00' : get_sub_field('time_to')
    );
    // Finally, push this array to the schema array
    array_push($schema['openingHoursSpecification'], $special_days);
  endwhile;
}

The last piece is our Contact Information data. Again, we're working with a loop that creates an array that then gets injected into the schema array which, in turn, gets injected into the document <head>.

Notice that the phone number needs the country code, which you can swap out for your own:

// Let's check for Contact Information rows
if (get_field('contact', 'options')) {
  // Then create an array of the data, if it exists
  $schema['contactPoint'] = array();
  // For each row of contact information...
  while (have_rows('contact', 'options')) : the_row();
    // ...fetch the following fields
    $contacts = array(
      '@type' => 'ContactPoint',
      'contactType' => get_sub_field('type'),
      'telephone' => '+49' . get_sub_field('phone')
    );
    // Let's not forget the Option field
    if (get_sub_field('option')) {
      $contacts['contactOption'] = get_sub_field('option');
    }
    // Finally, push this array to the schema array
    array_push($schema['contactPoint'], $contacts);
  endwhile;
}

Let's Marvel at Our Work!

We can now encode our data in JSON and put it into a script tag right before the closing of our add_action function.

echo '<script type="application/ld+json">' . json_encode($schema) . '</script>';

The final script might look something like this:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Store",
  "name": "My Store",
  "url": "https://my-domain.com",
  "telephone": "+49 1234 567",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "Musterstraße",
    "postalCode": "13123",
    "addressLocality": "Berlin",
    "addressRegion": "Berlin",
    "addressCountry": "Deutschland"
  },
  "sameAs": ["https://facebook.com/my-profile"],
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Mo", "Tu", "We", "Th", "Fr"],
    "opens": "07:00",
    "closes": "20:00"
  }, {
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Sa", "Su"],
    "opens": "00:00",
    "closes": "00:00"
  }, {
    "@type": "OpeningHoursSpecification",
    "validFrom": "2017-08-12",
    "validThrough": "2017-08-12",
    "opens": "10:00",
    "closes": "12:00"
  }],
  "contactPoint": [{
    "@type": "ContactPoint",
    "contactType": "customer support",
    "telephone": "+491527381923",
    "contactOption": ["HearingImpairedSupported"]
  }]
}
</script>

Conclusion

Hey, look at that! Now we can enhance a website's search engine presence with optimized data that allows search engines to crawl and interpret information in an organized way that promotes better user experience.

Of course, this example was primarily focused on JSON-LD, Google's schema specifications and using WordPress as a vehicle for managing and generating data. If you have written up ways of managing and handling data on other formats, using different specs and other content management systems, please share it here in the comments and we can start to get a bigger picture for improving SEO all around.


Breaking the Grid

Css Tricks - Tue, 09/05/2017 - 1:32pm

If you thought CSS Grid solves the issue of overflowed content escaping the confines of a horizontal layout, then think again. Dave Rupert writes up two ways he unintentionally broke outside the grid and how he wrangled things back into place.

As a Front-End developer nothing bothers me more than seeing an unexpected horizontal scrollbar on a website. While building out a checkout layout with CSS Grid I was surprised to find something mysterious was breaking the container. I thought Grid sort of auto-solved sizing.

Eventually I found two ways to break CSS Grid. As it would happen, I was doing both in the same layout.

Turns out these special cases boil down to:

  • Using overflow-x on a grid element
  • Using grid on form controls (or, more specifically, replaced elements)

Dave's solution is a set of CSS rules affectionately named Fit Grid, which is a helper class that effectively removes and replaces the automatic min-width: auto behavior assigned to grid items. This is a super helpful resource, though he admits it toes the line of "Clearfix 2.0" territory.
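
The gist of the fix (a rough sketch, not Dave's exact Fit Grid code) is overriding that implicit minimum so grid items are allowed to shrink:

/* Grid items default to min-width: auto, which can force them wider
   than their track; letting them shrink keeps overflow-x scrolling
   and form controls inside the grid container */
.fit-grid > * {
  min-width: 0;
}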


Improving your web font performance

Nice Web Type - Tue, 09/05/2017 - 9:14am

We work hard to deliver the best performance by continuously updating and improving our web font service. In the last few years, we’ve added support for asynchronous font loading, language-based subsets, HTTP/2, and, just last week, CSS kits.

But there’s even more you can do on your end to improve performance, which is just one of the topics I get into in the Webfont Handbook — released earlier today with A Book Apart. If you aren’t sure where to begin with your own site, these three optimization tips are a great place to start. I’ll walk through these in a little detail today, but do check out the book for a whole lot more.

1. Review your font usage

The default JavaScript embed code will load all fonts and variations in a kit, even if you don’t use them. You can significantly reduce your kit size if you remove fonts and variations you don’t use.

While you’re in the kit editor, take the opportunity to review your subsetting options. The “All Characters” subset delivers the entire font to your site and usually results in a large kit size. You can reduce the size of your kit by switching to the Default subset, or by using a language-based subset.

It’s worth pointing out that subsetting can also be very dangerous. If you accidentally remove characters that you actually need they’ll show up in a fallback font. When in doubt, the Default subset with OpenType Features checked is the right choice.

2. Load fonts and kits asynchronously

The default JavaScript embed code will load the JavaScript kit in a render-blocking way. However, once the JavaScript loads, the kit will load the fonts asynchronously. Why wait for the JavaScript to load? You’ll get better performance and the same behavior by switching to the advanced embed code; the advanced embed code will load both the fonts and JavaScript asynchronously.

One downside of loading fonts asynchronously is that you’ll need to manage the flash of unstyled text (FOUT) yourself. Typekit has excellent documentation on font events, and the Webfont Handbook goes into great detail on tricks to minimise FOUT.
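
As a minimal sketch of what that management can look like, font events add classes to the html element that you can hook into with CSS (these are the standard Web Font Loader class names; "My Web Font" is a placeholder for your own kit's fonts):

/* While fonts load, show the fallback stack right away
   instead of invisible text */
.wf-loading body {
  font-family: Georgia, serif;
}

/* Once the web fonts are active, switch over */
.wf-active body {
  font-family: "My Web Font", Georgia, serif;
}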

3. Preload and preconnect

Web fonts are a critical component of your site’s performance; you want your content to appear as soon as possible and preferably in the correct font. You can help the browser prioritize resources by using preconnect and preload resource hints.

Preconnect is used to tell the browser that you’ll soon connect to a hostname. Once the browser sees the preconnect hint, it opens a connection in the background, so it’s ready to use.

Then by the time the browser comes across the Typekit embed code (you’re using the advanced embed code, right?), it can re-use the connection to Typekit’s font network. Doing this can easily save several seconds.
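
For example, a preconnect hint for Typekit's font network might look like this (the exact hostname your kit uses may differ):

<link rel="preconnect" href="https://use.typekit.net" crossorigin>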

Preload is another resource hint, which not only creates a connection but actually downloads the resource as well so it’s right there when you need it. This can be useful to preload Typekit’s JavaScript or CSS file.

<link rel="preload" href="https://use.typekit.net/abc1def.js" as="script" crossorigin>

<link rel="preload" href="https://use.typekit.net/abc1def.css" as="style" crossorigin>

Preconnect and preload hints are especially useful when you’re using the advanced embed code or a CSS kit. The browser will create a connection, or fetch the kit JavaScript with high priority in the background without blocking rendering. You get the benefits of asynchronous loading and the performance of render-blocking resources.

The Webfont Handbook is packed with insights that came from several years of looking into and following these issues — not only web font performance, but also licensing, text rendering, CSS syntax, and more. If your work regularly involves type on the web, the Webfont Handbook just might be your new go-to guide.


Building a design system for HealthCare.gov

Css Tricks - Tue, 09/05/2017 - 7:50am

Sawyer Hollenshead has written up his thoughts about how he collaborated with the designers and developers on the HealthCare.gov project.

In this post, I’d like to share some of the bigger technical decisions we made while building that design system. Fortunately, a lot of smart folks have already put a lot of thought into the best approaches for building scalable, developer-friendly, and flexible design systems. This post will also shine a light on those resources we used to help steer the technical direction.

There's a lot going on in here, from guidelines on code architecture and documentation to build systems and versioning. In other words, there's a lot of great detail on the inner workings of a massive public project that many of us are at least outwardly familiar with.

Interesting to note that this project is an offshoot of the U.S. Web Design Standards project, but tailored specifically for the Centers for Medicare & Medicaid Services, which oversees HealthCare.gov.


When Design Becomes Part of the Code Workflow

Css Tricks - Tue, 09/05/2017 - 3:55am

I recently did an experiment where I created the same vector illustration in three different applications, exported the illustration as SVG in each application, then wrote a post comparing the exported code.

While I loved the banter and insights that came in the comments, I was surprised that the bulk of conversation was centered on the file size of the compiled SVG.

Not because performance and SVG don't go hand-in-hand, or because performance isn't the sort of thing we generally care about in the front-end community. I was surprised because my personal takeaway from the experiment was a reminder that SVG code is code at the end of the day, and that the way we create SVG in applications is now more a part of the front-end workflow than perhaps it has been in the past.

I still believe that is the key point from the post and wanted to write a follow-up that not only more clearly articulates it, but also details how we may need to change the way we think about design deliverables for projects that use SVG.

The gap between design and code is getting narrower

We already know this and have extolled the virtues of designers who know how to code. However, what the SVG experiment revealed to me is that those virtues are no longer so much an ideal as much as they are a growing necessity.

If a project calls for SVG and a designer has been tasked with creating illustrations and providing design assets for development, then the designer is no longer handing over a static file, but a snippet of code and, depending on the scope of the project, that code may very well be inlined or injected directly into the HTML document.

Sure, we can intervene and check the code that is provided. We may even run it through a tool like SVGOMG or have automated tasks that help clean and optimize the code before it gets served to production. That is all great, but does not change the fact that what we were delivered in the first place was a piece of code and that there is now an additional consideration in our workflow to code review a design asset.

That's a significant change. It's not a bad change or even true in all scenarios, but it is a significant one for no reason more than it requires a change in how we think about, request, and handle design deliverables on a project.

A new era of design etiquette is upon us

I was one of many, many fans of the Photoshop Etiquette site when I learned about it. It not only struck about a dozen nerves that rang true to my own experiences working with other designers on web projects, but forced me to re-examine and improve my own design practices for the benefit of working within teams. Tips like nicely organized layers with a consistent naming convention make a world of difference when a file is handed off from one person to another, much like nicely documented CSS that uses consistent naming conventions and is generous with comments.

SVG makes these tips much more about necessity than etiquette. Again, now that we have a design deliverable that becomes code, the decisions a designer makes—from configuring an artboard to how the layers are grouped and named—all influence how the SVG code is compiled and ultimately used in production.

Perhaps it's time for an offshoot of Photoshop Etiquette that is more squarely focused on SVG design deliverables using illustration applications.

Applications are super smart, but still need human intervention

My favorite comment from the previous post was a manually coded rendition of the SVG illustration. The code was much cleaner and way more efficient than any of the versions generated by the applications being compared.

Whether or not it was the point of the comment, what I love most about it is how it proves we cannot always take what applications give us for granted. It's freaking amazing that an application like Sketch can take a series of shapes I draw on a screen and turn them into valid and working code, but is it the best code for the situation? It could be. Then again, the commenter proved that it could be done better if the goal was a smaller file size and more readable code.

All three of the applications I tested are remarkably smart, incredibly useful, and have unique strengths that make each one a legitimate and indispensable tool in anyone's web development arsenal. The point here is not to distrust them or stay away from using them.

The point is that they are only as smart as the people using them. If we give them bad shapes and disorganized layers, then we can likely expect less-than-optimal code in return. I would go so far as to say that my method for creating the illustration in the experiment likely influenced the final output in all three cases and may not have given the applications the best shot for generating stellar code.

Either way, it took a human reviewing that generated code and optimizing it by hand to make the point.

Wrapping Up

I want to give a big ol' thanks to everyone who commented on the previous post. What started as a simple personal curiosity became a more nuanced experiment and I was stoked to see it spark healthy debate and insightful ideas. It was those comments and some ensuing offline conversations that made me think deeper about the hand-off between design and development, which ultimately wound up being the key takeaway from the entire exercise.


Custom Elements Everywhere

Css Tricks - Mon, 09/04/2017 - 5:06am

Custom Elements Everywhere is a site created by Rob Dodson. It displays the results of a set of tests that check JS frameworks for interoperability issues with Custom Elements and Shadow DOM.

It could look like a report card at first glance, but the description at the top of the site nicely sums up the goal of comparing frameworks:

This project runs a suite of tests against each framework to identify interoperability issues, and highlight potential fixes already implemented in other frameworks. If frameworks agree on how they will communicate with Custom Elements, it makes developers' jobs easier; they can author their elements to meet these expectations.

Nice! Consensus and consistency are exactly what Custom Elements need in light of the official spec being a working draft and the surge in JS frameworks using them.


Switching Your Site to HTTPS on a Shoestring Budget

Css Tricks - Mon, 09/04/2017 - 4:17am

Google's Search Console team recently sent out an email to site owners with a warning that Google Chrome will take steps starting this October to identify and show warnings on non-secure sites that have form inputs.

Here's the notice that landed in my inbox:

The notice from the Google Search Console team regarding HTTPS support

If your site URL does not support HTTPS, then this notice directly affects you. Even if your site does not have forms, moving over to HTTPS should be a priority, as this is only one step in Google's strategy to identify insecure sites. They state this clearly in their message:

The new warning is part of a long term plan to mark all pages served over HTTP as "not secure".

Current Chrome's UI for a site with HTTP support and a site with HTTPS

The problem is that the process of installing SSL certificates and transitioning site URLs from HTTP to HTTPS—not to mention editing all those links and linked images in existing content—sounds like a daunting task. Who has time and wants to spend the money to update a personal website for this?

I use GitHub Pages to host a number of sites and projects for free—including some that use custom domain names. To that end, I wanted to see if I could quickly and inexpensively convert a site from HTTP to HTTPS. I wound up finding a relatively simple solution on a shoestring budget that I hope will help others. Let's dig into that.

Enforcing HTTPS on GitHub Pages

Sites hosted on GitHub Pages have a simple setting to enable HTTPS. Navigate to the project's Settings and flip the switch to enforce HTTPS.

The GitHub Pages setting to enforce HTTPS on a project

But We Still Need SSL

Sure, that first step was a breeze, but it's not the full picture of what we need to do to meet Google's definition of a secure site. The reason is that enabling the HTTPS setting neither provides nor installs a Secure Sockets Layer (SSL) certificate to a site that uses a custom domain. Sites that use the default web address provided by GitHub Pages are fully secure with that setting, but those of us that use a custom domain have to go the extra step of securing SSL at the domain level.

That's a bummer because SSL, while not super expensive, is yet another cost and likely one you may not want to incur when you're trying to keep costs down. I wanted to find a way around this.

We Can Get SSL From a CDN ... for Free!

This is where Cloudflare comes in. Cloudflare is a Content Delivery Network (CDN) that also provides distributed domain name server services. What that means is that we can leverage their network to set up HTTPS. The real kicker is that they have a free plan that makes this all possible.

It's worth noting that there are a number of good posts here on CSS-Tricks that tout the benefits of a CDN. While we're focused on the security perks in this post, CDNs are an excellent way to help reduce server burden and increase performance.

From here on out, I'm going to walk through the steps I used to connect Cloudflare to GitHub Pages so, if you haven't already, you can snag a free account and follow along.

Step 1: Select the "+ Add Site" option

First off, we have to tell Cloudflare that our domain exists. Cloudflare will scan the DNS records to verify both that the domain exists and that the public information about the domain is accessible.

Cloudflare's "Add Website" Setting

Step 2: Review the DNS records

After Cloudflare has scanned the DNS records, it will spit them out and display them for your review. Cloudflare indicates that it believes things are in good standing with an orange cloud in the Status column. Review the report and confirm that the records match those from your registrar. If all is good, click "Continue" to proceed.

The DNS record report in Cloudflare

Step 3: Get the Free Plan

Cloudflare will ask what level of service you want to use. Lo and behold! There is a free option that we can select.

Cloudflare's free plan option

Step 4: Update the Nameservers

At this point, Cloudflare provides us with its server addresses and our job is to head over to the registrar where the domain was purchased and paste those addresses into the DNS settings.

Cloudflare provides the nameservers for updating the registrar settings.

It's not incredibly difficult to do this, but can be a little unnerving. Your registrar likely has instructions for how to do this. For example, here are GoDaddy's instructions for updating nameservers for domains registered through their service.

Once you have done this step, your domain will effectively be mapped to Cloudflare's servers, which will act as an intermediary between the domain and GitHub Pages. However, it is a bit of a waiting game and can take Cloudflare up to 24 hours to process the request.

If you are using GitHub Pages with a subdomain instead of a custom domain, there is one extra step you are required to do. Head over to the DNS settings in Cloudflare and add a CNAME record. Set it to point to <your-username>.github.io, where <your-username> is, of course, your GitHub account handle. Oh, and you will need to add a CNAME text file to the root of your GitHub project, which is literally a text file named CNAME with your domain name in it.
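
To make that concrete, here is roughly what those two pieces look like (the subdomain and username here are placeholders):

# DNS record added in Cloudflare
Type: CNAME    Name: www    Value: your-username.github.io

# Contents of the CNAME file at the root of the GitHub Pages repo
www.example.com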

Here is a screenshot with an example of adding a GitHub Pages subdomain as a CNAME record in Cloudflare's settings:

Adding a GitHub Pages subdomain to Cloudflare

Step 5: Enable HTTPS in Cloudflare

Sure, we've technically already done this in GitHub Pages, but we're required to do it in Cloudflare as well. Cloudflare calls this feature "Crypto" and it not only forces HTTPS, but provides the SSL certificate we've been wanting all along. But we'll get to that in just a bit. For now, enable Crypto for HTTPS.

The Crypto option in Cloudflare's main menu

Turn on the "Always use HTTPS" option:

Enable HTTPS in the Cloudflare settings

Now any HTTP request from a browser is switched over to the more secure HTTPS. We're another step closer to making Google Chrome happy.

Step 6: Make Use of the CDN

Hey, we're using a CDN to get SSL, so we may as well take advantage of its performance benefits while we're at it. We can speed things up by letting Cloudflare minify files automatically and by extending the browser cache expiration.

Select the "Speed" option in the settings and allow Cloudflare to auto minify our site's web assets:

Allow Cloudflare to minify the site's web assets

We can also set the expiration on browser cache to maximize performance:

Set the browser cache in Cloudflare's Speed settings

By moving the expiration date out longer than the default option, the browser will refrain from asking for a site's resources on each and every visit—that is, resources that more than likely haven't been changed or updated. This will save visitors an extra download on repeat visits within a month's time.
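
Under the hood, this setting is reflected in the Cache-Control header Cloudflare sends with your assets; a roughly one-month expiration looks something like this (the exact value depends on the option you pick):

Cache-Control: public, max-age=2678400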

Step 7: Make External Resources Secure

If you use external resources on your site (and many of us do), then those need to be served securely as well. For example, if you use a JavaScript framework and it is not served from an HTTPS source, that blows our secure cover as far as Google Chrome is concerned and we need to patch that up.

If the external resource you use does not provide HTTPS as a source, then you might want to consider hosting it yourself. We have a CDN now that makes the burden of serving it a non-issue.

Step 8: Activate SSL

Woot, here we are! SSL has been the missing link between our custom domain and GitHub Pages since we enabled HTTPS in the GitHub Pages setting and this is where we have the ability to activate a free SSL certificate on our site, courtesy of Cloudflare.

From the Crypto settings in Cloudflare, let's first make sure that the SSL certificate is active:

Cloudflare shows an active SSL certificate in the Crypto settings

If the certificate is active, move to "Page Rules" in the main menu and select the "Create Page Rule" option:

Create a page rule in the Cloudflare settings

...then click "Add a Setting" and select the "Always use HTTPS" option:

Force HTTPS on that entire domain! Note the asterisks in the formatting, which is crucial.

After that click "Save and Deploy" and celebrate! We now have a fully secure site in the eyes of Google Chrome and didn't have to touch a whole lot of code or drop a chunk of change to do it.

In Conclusion

Google's push for HTTPS means front-end developers need to prioritize SSL support more than ever, whether it's for our own sites, company sites, or client sites. This move gives us one more incentive to make the move and the fact that we can pick up free SSL and performance enhancements through the use of a CDN makes it all the more worthwhile.

Have you written about your adventures moving to HTTPS? Let me know in the comments and we can compare notes. Meanwhile, enjoy a secure and speedy site!


Problem space

Css Tricks - Fri, 09/01/2017 - 3:29am

Speaking of utility libraries, Jeremy Keith responded to Adam Wathan's article that we linked to not long ago. Jeremy is with him through the first four "phases", but can't come along for phase 5, the one about going all-in on utility libraries:

At this point there is no benefit to even having an external stylesheet. You may as well use inline styles. Ah, but Adam has anticipated this and counters with this difference between inline styles and having utility classes for everything:

You can’t just pick any value you want; you have to choose from a curated list.

Right. But that isn’t a technical solution, it’s a cultural one. You could just as easily have a curated list of allowed inline style properties and values. If you are in an environment where people won’t simply create a new utility class every time they want to style something, then you are also in an environment where people won’t create new inline style combinations every time they want to style something.

I think Adam has hit on something important here, but it’s not about utility classes. His suggestion of “utility-first CSS” will only work if the vocabulary is strictly adhered to. For that to work, everyone touching the code needs to understand the system and respect the boundaries of it.


Best Way to Programmatically Zoom a Web Application

Css Tricks - Fri, 09/01/2017 - 3:22am

Website accessibility has always been important, but nowadays, when we have clear standards and regulations from governments in most countries, it's become even more crucial to support those standards and make our projects as accessible as they can be.

The W3C recommendation provides three levels of conformance: A, AA and AAA. To be at the AA level, among other requirements, we have to provide a way to increase the site's font size:

1.4.4 Resize text: Except for captions and images of text, text can be resized without assistive technology up to 200 percent without loss of content or functionality. (Level AA)

Let's look at solutions for this and try to find the best one we can.

Incomplete Solution: CSS zoom

The first word that comes up when we talk about changing size is zoom. CSS has a zoom property, and it does exactly what we want: it increases size.

Let's take a look at a common design pattern (that we'll use for the rest of this article): a horizontal bar of navigation that turns into a menu icon at a certain breakpoint:

This is what we want to happen: no wrapping, and the entire menu turns into a menu icon at a specified breakpoint.

The GIF below shows what we get with the zoom approach applied to the menu element. I created a switcher which allows selecting different sizes and applies an appropriate zoom level:

Check out the Pen here if you want to play with it.
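For reference, the switcher boils down to something like this; the class names are my own stand-ins, not necessarily what the Pen uses:

/* a modifier class on the html element, toggled by the switcher's buttons,
   sets the zoom level for the page container */
html.zoom--m .site { zoom: 1.5; }
html.zoom--l .site { zoom: 2; }
html.zoom--xl .site { zoom: 4; }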

The menu goes outside the visible area because we cannot programmatically increase the viewport width with zoom, nor can we wrap the menu because of the requirement. The menu icon doesn't appear either, because the screen size hasn't actually changed; it's the same as before we clicked the switcher.

On top of all these problems, zoom is not supported by Firefox at all anyway.

Wrong Solution: Scale Transforms

We can get largely the same effect with transform: scale as we got with zoom. Except transform is more widely supported by browsers. Still, we get the exact same problems we got with zoom: the menu doesn't fit into the visible area and, worse, it goes beyond the vertical visible area as well, because the page layout is calculated based on the initial scale factor of 1.
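The scale version of the same idea looks roughly like this (again, the class names are assumptions):

/* transform doesn't affect layout, so the page still overflows the viewport */
html.zoom--m .site {
  transform: scale(1.5);
  transform-origin: 0 0; /* keep the top-left corner anchored */
}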

See the Pen Font-switcher--wrong-scale by Mikhail Romanov (@romanovma) on CodePen.

Another Incomplete Solution: rem and html font-size

Instead of zooming or scaling, we could use rem as the sizing unit for all elements on the page. We can then change their size by altering the html element's font-size property, because, by definition, 1rem equals the html element's font-size value.
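A simplified sketch of the approach (class names and values here are illustrative):

/* size components in rems... */
.menu__item {
  font-size: 1.125rem;
  padding: 1rem 1.5rem;
}

/* ...then scale the whole UI by changing the root font-size */
html.font-size--m { font-size: 24px; } /* 1.5 times the 16px default */
html.font-size--l { font-size: 32px; } /* 2 times */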

This is a fairly good solution, but not quite perfect. As you can see in the following demo, it has the same issues as the previous examples: at one point it doesn't fit horizontally, because the required space increases but the viewport width stays the same.

See the Pen Font-switcher--wrong-rem by Mikhail Romanov (@romanovma) on CodePen.

The trouble, in a sense, is that the media queries don't adjust to the change in size. When we go up in size, the media queries should adjust accordingly, so that the breakpoint effects happen at the same place relative to the content as they did before the size change.

Working Solution: Emulate Browser Zoom with a Sass Mixin

To find inspiration, let's see how the native browser zoom feature handles the problem:

Wow! Chrome understands that zooming actually does change the viewport. The larger the zoom, the narrower the viewport, meaning that our media queries will actually take effect like we expect and need them to.

One way to achieve this (without relying on native zoom, because there is no way for us to access that from our own on-page controls as required by AA) is to somehow update the media query values every time we switch the font size.

For example, say we have a media query breakpoint at 1024px and we perform a 200% zoom. We should update that breakpoint to 2048px to compensate for the new size.

Shouldn't this be easy? Can't we just set the media queries with rem units so that when we increase the font-size the media queries automatically adjust? Sadly, no, that approach doesn't work. Try updating the media query from px to rem in this Pen and you'll see that nothing changes. The layout doesn't switch breakpoints after increasing the size. That is because, according to the spec, both rem and em units in media queries are calculated based on the initial value of the html element's font-size, which is normally 16px (and can vary).

Relative units in media queries are based on the initial value, which means that units are never based on results of declarations. For example, in HTML, the em unit is relative to the initial value of 'font-size'.

We can use power of Sass mixins to get around this though! Here is how we'll do it:

  • we'll use a special class on the html element for each size (font-size--s, font-size--m, font-size--l, font-size--xl, etc.)
  • we'll use a special mixin, which creates a media query rule for every combination of breakpoint and size and which takes into account both the screen width and the modifier class applied to the html element
  • we'll wrap code with this mixin everywhere we want to apply a media query.

Here is how this mixin looks:

$desktop: 640px;

$m: 1.5;
$l: 2;
$xl: 4;

// the main trick is here. We increase the min-width if we increase the font-size
@mixin media-desktop {
  html.font-size--s & {
    @media (min-width: $desktop) { @content; }
  }
  html.font-size--m & {
    @media (min-width: $desktop * $m) { @content; }
  }
  html.font-size--l & {
    @media (min-width: $desktop * $l) { @content; }
  }
  html.font-size--xl & {
    @media (min-width: $desktop * $xl) { @content; }
  }
}

.menu {
  @include media-desktop {
    &__mobile {
      display: none;
    }
  }
}

And an example of the CSS it generates:

@media (min-width: 640px) {
  html.font-size--s .menu__mobile { display: none; }
}
@media (min-width: 960px) {
  html.font-size--m .menu__mobile { display: none; }
}
@media (min-width: 1280px) {
  html.font-size--l .menu__mobile { display: none; }
}
@media (min-width: 2560px) {
  html.font-size--xl .menu__mobile { display: none; }
}

So if we have n breakpoints and m sizes, we will generate n times m media query rules. That covers all possible cases and gives us the desired ability to use scaled-up media queries when the font size is increased.

Check out the Pen below to see how it works:

See the Pen Font-switcher--right by Mikhail Romanov (@romanovma) on CodePen.

Drawbacks

There are some drawbacks though. Let's see how we can handle them.

Increased specificity on media-query selectors.

All code inside the media query gets an additional level of specificity because it goes inside the html.font-size--x selector. So if we go with the mobile-first approach and use, for example, a .no-margin modifier on an element, the normal desktop styles can win over the modifier and desktop margins will be applied.

To avoid this, we can create the same kind of mixin for mobile and wrap not only the desktop CSS but also the mobile CSS in our mixins. That will balance the specificity.
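A sketch of that balancing mixin might look like this, mirroring the media-desktop mixin above but with max-width (the exact breakpoint math is my own assumption):

// same pattern as media-desktop, but for the mobile range,
// so mobile rules carry the same html.font-size--x specificity
@mixin media-mobile {
  html.font-size--s & {
    @media (max-width: $desktop - 1) { @content; }
  }
  html.font-size--m & {
    @media (max-width: $desktop * $m - 1) { @content; }
  }
  html.font-size--l & {
    @media (max-width: $desktop * $l - 1) { @content; }
  }
  html.font-size--xl & {
    @media (max-width: $desktop * $xl - 1) { @content; }
  }
}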

Other options are to handle every special case individually by artificially increasing specificity, or to create a mixin with the desired functionality (no margin, in our example) and include it not only for mobile but also in the code for every breakpoint.

Increased amount of generated CSS.

The amount of generated CSS will be higher because we generate the same CSS code for every size.

This shouldn't be an issue if files are compressed with gzip (and that is usually the case) because it handles repeated code very well.

Some front-end frameworks like Zurb Foundation use built-in breakpoints in JavaScript utilities and CSS media queries.

That is a tough one. Personally, I try to avoid the features of a framework that depend on the screen width. The main one that might be missed is the grid system, but with the rise of flexbox and CSS grid, I don't see that as an issue anymore. Check out this great article for more details on how to build your own grid system.

But if a project depends on a framework like this, or we don't want to fight the specificity problem but still want to go with AA, then I would consider getting rid of fixed-height elements and using rems together with altering the html element's font-size to update the layout and text dimensions accordingly.

Thank you for reading! Please let me know if this helps and what other issues you faced conforming to the 1.4.4 resize text W3C Accessibility requirement.

Best Way to Programmatically Zoom a Web Application is a post from CSS-Tricks

A Book Apart

Css Tricks - Thu, 08/31/2017 - 2:57am

(This is a sponsored post.)

You know A Book Apart! They've published all kinds of iconic books in our field. They are short and to the point. The kind of book you can read in a single flight.

I wrote one not so long ago called Practical SVG. Fortunately for us both, SVG isn't the most fast-moving technology out there, so reading the book now and using what you learn is just as useful as it ever was.

More interested in JavaScript? They've got it. HTML? CSS? Typography? Responsive design? All covered. In fact, you should probably just browse the library yourself, or get them all.

Better still, now is the time to do it, because 15% of all sales will directly benefit those affected by Hurricane Harvey.

Direct Link to ArticlePermalink

A Book Apart is a post from CSS-Tricks

How to Write Better Code: The 3 Levels of Code Consistency

Css Tricks - Thu, 08/31/2017 - 2:46am

While working on an article about user-centered web development, I ended up exploring the topic of consistency in code a bit more. Consistency is one of the key reasons we need coding guidelines and is also a factor in code quality. Interestingly enough, I noted, there are three levels of consistency: individual, collective, and institutional.

Level 1: Individual Consistency

At a basic level, when there's little standardization in our organization (or when we work alone), consistency simply means being consistent with ourselves. We benefit from always formatting code the same way.

If we, working on our own, usually omit unneeded quotes around attribute values (which is absolutely valid, as such projects prove), we should always do so. If we prefer not to end the last declaration in a CSS rule with a semicolon, we should never do so. If we prefer tabs, we should always use tabs.

Level 2: Collective Consistency

At the next level (here we assume there is code from other developers or third parties), consistency means following the code style used wherever we touch code. We should respect and stay consistent with the code style prevalent in the files we touch.

When we help our colleague launch a site and tweak their CSS, we format the code the way they did. When we tweak some core files of our content management system (if that were advisable), we do what it does. When we write a new plugin for something, we do it the way other plugins are written.

Level 3: Institutional Consistency

Finally, normally a level reached in bigger organizations, consistency means adhering to (or first creating) the organization’s coding guidelines and style guides. If the guidelines are well-established and well-enforced, this type of consistency offers the most power to also effect changes in behavior—individual consistency offers almost no incentive for that, collective consistency only temporarily.

When we normally indent with spaces and the corporate style guide says tabs, we use tabs. When our colleagues launch their mini-project and, while helping out, we discover their code isn't compliant with the corporate guidelines, we take the time to refactor it. When we start something new, perhaps based on a different language, we kick off a guideline setup and standardization process.

These Consistency Levels Are Not Mutually Exclusive

In our own affairs, we should at least strive for level 1, but personally I've had a great experience hooking myself up to an external level 3 standard (I'm following Google's HTML/CSS guidelines, with the only exception of using tabs instead of spaces) and defining, in detail, a complementary level 1-style standard (such as a predefined selector order).

Whenever we deal with other developers, and only if there's a lack of a wider standard, we should at least aim for level 2 consistency, that is, respecting their code. If we touch something in their domain, we write it like they would.

When we are in a bigger organization — though "bigger" can truly start at two people — this same idea of level 2 consistency prevails, but we can now think of setting up standards to operate at level 3. There, we can even marry the two levels: Follow the coding guidelines, but when we touch something that violates the guidelines and we don’t have the time to reformat it, we follow the style prevalent in that code.

In my experience, being aware of these levels alone helps a great deal in writing more consistent, and with that, better code.

If you'd like to learn more about coding standards, check out other CSS-Tricks posts about the topic, and if you'd like a short, very short read about them, perhaps also The Little Book of HTML/CSS Coding Guidelines.

How to Write Better Code: The 3 Levels of Code Consistency is a post from CSS-Tricks

Now in Early Access: Serve web fonts without JavaScript

Nice Web Type - Wed, 08/30/2017 - 6:06am

We’re excited to ship one of your all-time most requested features: you can now add fonts to your web site using only CSS (Cascading Style Sheets)—no JavaScript required. Also, you can now use fonts from Typekit in HTML email campaigns, mobile articles in Google’s AMP format, or anywhere else CSS web fonts are supported.

Turn on Early Access and you’ll see the new CSS-only embed code in our kit editor, available as an HTML link tag or CSS @import statement. Your existing websites and kits will continue to work with the default JavaScript embed code, and you will now be presented with the new CSS embed code whenever you create a new kit, or when you access an existing kit’s embed code.
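For reference, the @import flavor of a CSS-only embed boils down to a single line; the exact URL comes from your kit's embed code dialog, and the kit ID below is just a placeholder:

/* load the kit's fonts with CSS alone; "abc1def" stands in for your kit ID */
@import url("https://use.typekit.net/abc1def.css");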

CSS kits will finally allow you to use web fonts from Typekit in places our previous reliance on JavaScript prevented us from supporting, such as:

  • HTML email. You can now use fonts from Typekit in emails. Many email clients support HTML and CSS, but not JavaScript. Style your email campaigns with beautiful typography from the Typekit library, and stand out from the rest.
  • Google AMP. You don’t have to sacrifice style for speed – use your Typekit web fonts with mobile article formats to reach a wider audience. Google AMP is now compatible with our CSS embed code.
  • Custom stylesheets. Some web page builders or other web based software (like wikis) allow you to edit CSS but not HTML. You can add fonts from Typekit to those sites by using the @import version of the CSS embed code.

For more details and step-by-step support, check out our guide to getting started with CSS-only web font serving.

Which embed code format should I use?

Either embed code gives you control over the OpenType features and language support in your site’s web fonts; you can still configure these options in our kit editor.

For most web developers, the CSS embed code is the most efficient way to add web fonts to your site. Using only CSS to deliver web fonts allows you to take advantage of newer advances in how browsers load and render fonts, and removing JavaScript code and execution from the loading process should provide a small but welcome speed boost.

The advanced JavaScript embed code is still the right choice for sites that use East Asian web fonts, which depend on our JavaScript-based dynamic subsetting feature for support.

For advanced users or in specific use cases, our JavaScript embed code gives you fine-tuned control over how fonts are loaded.

  • If you don’t want to block the page until fonts are loaded, use our advanced embed code with async: true turned on. This will result in an initial flash of unstyled text (FOUT), but will allow your page content to load immediately, with fonts swapped in as they are loaded by the browser.
  • Network blips, routing failures, or service downtime could all potentially affect your fonts. The advanced embed code gives you control over functionality such as font loading behavior and timeouts.
  • The advanced embed code loads both the JavaScript file and fonts asynchronously into your site for optimal performance.

Browser support

All of the same browsers and versions that currently support JavaScript web font serving will also support CSS web font serving. See a detailed listing of support.

We want your feedback!

We release features into Early Access because we feel confident that they are ready to use, and we’d like you to give it a try, test its limits, and let us know how you feel about the change before it becomes a core part of our product.

Please give us your feedback via the comments, Twitter, or directly to our support team at support@typekit.com. We hope you enjoy the new simplicity of using Typekit in your web projects as much as we do!


Building Skeleton Screens with CSS Custom Properties

Css Tricks - Wed, 08/30/2017 - 2:26am

Designing loading states on the web is often overlooked or dismissed as an afterthought. Performance is not only a developer's responsibility, building an experience that works with slow connections can be a design challenge as well.

While developers need to pay attention to things like minification and caching, designers have to think about how the UI will look and behave while it is in a "loading" or "offline" state.

The Illusion of Speed

As our expectations for mobile experiences change, so does our understanding of performance. People expect web apps to feel just as snappy and responsive as native apps, regardless of their current network coverage.

Perceived performance is a measure of how fast something feels to the user. The idea is that users are more patient and will think of a system as faster if they know what's going on and can anticipate content before it's actually there. It's a lot about managing expectations and keeping the user informed.

For a web app, this concept might include displaying "mockups" of text, images or other content elements - called skeleton screens 💀. You can find these in the wild, used by companies like Facebook, Google, Slack and others:

Holy moly to you too, Slack.

Facebook's Skeleton

An Example

Say you are building a web app. It's a travel-advice kind of thing where people can share their trips and recommend places, so your main piece of content might look something like this:

You can take that card and reduce it down to its basic visual shapes, the skeleton of the UI component.

Whenever someone requests new content from the server, you can immediately start showing the skeleton, while data is being loaded in the background. Once the content is ready, simply swap the skeleton for the actual card. This can be done with plain vanilla JavaScript or using a library like React.

Now you could use an image to display the skeleton, but that would introduce an additional request and data overhead. We're already loading stuff here, so it's not a great idea to wait for another image to load first. Plus it's not responsive, and if we ever decided to adjust some of the content card's styling, we would have to duplicate the changes to the skeleton image so they'd match again. 😒 Meh.

A better solution is to create the whole thing with just CSS. No extra requests, minimal overhead, not even any additional markup. And we can build it in a way that makes changing the design later much easier.

Drawing Skeletons in CSS

First, we need to draw the basic shapes that will make up the card skeleton. We can do this by adding different gradients to the background-image property. By default, linear gradients run from top to bottom, with different color stop transitions. If we just define one color stop and leave the rest transparent, we can draw shapes.

Keep in mind that multiple background-images are stacked on top of each other here, so the order is important. The last gradient definition will be in the back, the first at the front.

.skeleton {
  background-repeat: no-repeat;
  background-image:
    /* layer 2: avatar */
    /* white circle with 16px radius */
    radial-gradient(circle 16px, white 99%, transparent 0),

    /* layer 1: title */
    /* white rectangle with 40px height */
    linear-gradient(white 40px, transparent 0),

    /* layer 0: card bg */
    /* gray rectangle that covers whole element */
    linear-gradient(gray 100%, transparent 0);
}

These shapes stretch to fill the entire space, just like regular block-level elements. If we want to change that, we'll have to define explicit dimensions for them. The value pairs in background-size set the width and height of each layer, keeping the same order we used in background-image:

.skeleton {
  background-size:
    32px 32px,  /* avatar */
    200px 40px, /* title */
    100% 100%;  /* card bg */
}

The last step is to position the elements on the card. This works just like position: absolute, with value pairs representing the left and top properties. We can, for example, simulate a padding of 24px for the avatar and title to match the look of the real content card.

.skeleton {
  background-position:
    24px 24px,  /* avatar */
    24px 200px, /* title */
    0 0;        /* card bg */
}

Break it up with Custom Properties

This works well in a simple example - but if we want to build something just a little more complex, the CSS quickly gets messy and very hard to read. If another developer was handed that code, they would have no idea where all those magic numbers are coming from. Maintaining it would surely suck.

Thankfully, we can now use custom CSS properties to write the skeleton styles in a much more concise, developer-friendly way - and even take the relationship between different values into account:

.skeleton {
  /* define as separate properties */
  --card-height: 340px;
  --card-padding: 24px;
  --card-skeleton: linear-gradient(gray var(--card-height), transparent 0);

  --title-height: 32px;
  --title-width: 200px;
  --title-position: var(--card-padding) 180px;
  --title-skeleton: linear-gradient(white var(--title-height), transparent 0);

  --avatar-size: 32px;
  --avatar-position: var(--card-padding) var(--card-padding);
  --avatar-skeleton: radial-gradient(
    circle calc(var(--avatar-size) / 2),
    white 99%,
    transparent 0
  );

  /* now we can break the background up into individual shapes */
  background-image:
    var(--avatar-skeleton),
    var(--title-skeleton),
    var(--card-skeleton);

  background-size:
    var(--avatar-size),
    var(--title-width) var(--title-height),
    100% 100%;

  background-position:
    var(--avatar-position),
    var(--title-position),
    0 0;
}

Not only is this a lot more readable, it's also way easier to change some of the values later on. Plus we can use some of the variables (think --avatar-size, --card-padding, etc.) to define the styles for the actual card and always keep it in sync with the skeleton version.

Adding a media query to adjust parts of the skeleton at different breakpoints is now also quite simple:

@media screen and (min-width: 47em) {
  :root {
    --card-padding: 32px;
    --card-height: 360px;
  }
}

Browser support for custom properties is good, but not at 100%. Basically, all modern browsers have support, with IE/Edge a bit late to the party. For this specific use case, it would be easy to add a fallback using Sass variables.
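A minimal sketch of what such a fallback could look like, assuming the same values are also kept in Sass variables: browsers that don't understand var() keep the first declaration, everyone else gets the second.

// plain values first (from Sass variables), custom-property versions second
$card-padding: 24px;

.skeleton {
  background-position: $card-padding $card-padding, $card-padding 200px, 0 0;
  background-position: var(--avatar-position), var(--title-position), 0 0;
}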

Add Animation

To make this even better, we can animate our skeleton, and make it look more like a loading indicator. All we need to do is put a new gradient on the top layer and then animate its position with @keyframes.
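Here's a rough sketch of that idea; the gradient, sizing, and timing values below are my own placeholders rather than the exact ones from the demo (the finished version is in the Pen that follows):

.skeleton {
  /* a soft highlight that will sweep across the card */
  --glow: linear-gradient(90deg, transparent 0, rgba(255, 255, 255, 0.5) 50%, transparent 100%);

  background-repeat: no-repeat;
  background-image:
    var(--glow),
    var(--avatar-skeleton),
    var(--title-skeleton),
    var(--card-skeleton);

  background-size:
    200px 100%,
    var(--avatar-size),
    var(--title-width) var(--title-height),
    100% 100%;

  background-position:
    -150% 0,
    var(--avatar-position),
    var(--title-position),
    0 0;

  animation: loading 1.5s infinite;
}

/* only the first (highlight) layer moves; the other layers stay where they are */
@keyframes loading {
  to {
    background-position:
      350% 0,
      var(--avatar-position),
      var(--title-position),
      0 0;
  }
}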

Here's a full example of how the finished skeleton card could look:

Skeleton Loading Card by Max Böck (@mxbck) on CodePen.

You can use the :empty selector and a pseudo element to draw the skeleton, so it only applies to empty card elements. Once the content is injected, the skeleton screen will automatically disappear.
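For example, something along these lines (.card is an assumed class name, and the custom properties from the earlier .skeleton example are assumed to be available to the card):

/* draw the skeleton only while the card has no content */
.card:empty::after {
  content: "";
  display: block;
  height: var(--card-height);
  border-radius: 6px;

  /* same layered gradients as the .skeleton example above */
  background-repeat: no-repeat;
  background-image:
    var(--avatar-skeleton),
    var(--title-skeleton),
    var(--card-skeleton);
  background-size:
    var(--avatar-size),
    var(--title-width) var(--title-height),
    100% 100%;
  background-position:
    var(--avatar-position),
    var(--title-position),
    0 0;
}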

More on Designing for Performance

For a closer look at designing for perceived performance, check out these links:

Building Skeleton Screens with CSS Custom Properties is a post from CSS-Tricks
