Tech News

JavaScript Fatigue

Css Tricks - Thu, 08/06/2020 - 11:32am

From Nicholas Zakas’ newsletter, on how he avoids JavaScript fatigue:

 I don’t try to learn about every new thing that comes out. There’s a limited number of hours in the day and a limited amount of energy you can devote to any topic, so I choose not to learn about anything that I don’t think will be immediately useful to me. That doesn’t mean I completely ignore new things as they are released, but rather, I try to fit them into an existing mental model so I can talk about them intelligently. For instance, I haven’t built an application in Deno, but I know that Deno is a JavaScript runtime for building server-side applications, just like Node.js. So I put Deno in the same bucket as Node.js and know that when I need to learn more, there’s a point of comparison.

I too have used the “buckets” analogy. Please allow me to continue and overuse this metaphor.

I rarely try to learn anything just for the sake of it. I learn what I need to learn as whatever I’m working on demands it. But I try to know enough so that when I read tech news, I can place what I’m reading into mental buckets. Then I can, metaphorically, reach into those buckets when those topics come up. Plus, I can keep an eye on how full those buckets are. If I find myself putting a lot of stuff into one bucket, I might plunge in there and see what’s going on as there is clearly something in the water.

You’ll need to subscribe to the newsletter to get to the content.


Building Custom Data Importers: What Engineers Need to Know

Css Tricks - Thu, 08/06/2020 - 4:37am

Importing data is a common pain point for engineering teams. Whether it’s CRM data, inventory SKUs, or customer details, importing data into various applications and building a solution for it is a frustrating experience nearly every engineer can relate to. Data import, as a critical product experience, is a huge headache. It delays time to value for customers, strains internal resources, and takes valuable development cycles away from building key, differentiating product features.

Frequent error messages end-users receive when importing data. Why do we expect customers to fix this themselves?

Data importers, specifically CSV importers, haven’t been treated as key product features within the software and customer experience. As a result, engineers tend to dedicate an exorbitant amount of effort to creating less-than-ideal solutions for customers to import their data successfully.

Engineers typically create lengthy technical documentation for customers to review when an import fails. However, this doesn’t truly solve the issue; it simply shifts the burden of a great experience from the product to the end-user.

In this article, we’ll address the current problems with importing data and discuss a few key product features that are necessary to consider if you’re faced with a decision to build an in-house solution.

Importing data is typically frustrating for anyone involved at a data-led company. Simply put, there has never been a standard for importing customer data. Until now, teams have deferred to CSV templates, lengthy documentation, video tutorials, or buggy in-house solutions to let users import spreadsheets. Companies trying to import CSV data can run into a variety of issues, such as:

  • Fragmented data: With no standard way to import data, we get emails going back and forth with attached spreadsheets that are manually imported. As spreadsheets get passed around, there are obvious version control challenges. Who made this change? Why don’t these numbers add up as they did in the original spreadsheet? Why are we emailing spreadsheets containing sensitive data?
  • Improper formatting: CSV import errors frequently occur when formatting isn’t done correctly. As a result, companies often rely on internal developer resources to properly clean and format data on behalf of the client — a process that can take hours per customer, and may lead to churn anyway. This includes correcting dates or splitting fields that need to be healed prior to importing.
  • Encoding errors: There are plenty of instances where a spreadsheet can’t be imported because it’s improperly encoded. For example, a company may need a file to be saved with UTF-8 encoding (the encoding typically preferred for email and web pages) in order for it to be uploaded properly to their platform. Incorrect encoding can result in a lengthy chain of communication where the customer is burdened with correcting and re-importing their data.
  • Data normalization: A lack of data normalization results in data redundancy and a never-ending string of data quality problems that make customer onboarding particularly challenging. One example includes formatting email addresses, which are typically imported into a database, or checking value uniqueness, which can result in a heavy load on engineers to get the validation working correctly.
Remember building your first CSV importer?

When it comes down to creating a custom-built data importer, there are a few critical features that you should include to help improve the user experience. (One caveat: a data importer can be time-consuming not only to create but also to maintain. It’s easy to ensure your company has adequate engineering bandwidth when first creating a solution, but what about maintenance in 3, 6, or 12 months?)

A preview of Flatfile Portal. It integrates in minutes using a few lines of JavaScript.

Data mapping

Mapping, or column-matching (the terms are often used interchangeably), is an essential requirement for a data importer, as the file import will typically fail without it. An example is configuring your data model to accept contact-level data. If one of the required fields is “address” and the customer who is trying to import data chooses a spreadsheet where the field is labeled “mailing address,” the import will fail because “mailing address” doesn’t correlate with “address” in the back-end system. This is typically ‘solved’ by providing a pre-built CSV template for customers, who then have to manipulate their data, effectively increasing their time to value during a product experience. Data mapping needs to be included in the custom-built product as a key feature to retain data quality and improve the customer data onboarding experience.
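To make the “mailing address” example concrete, here is a minimal sketch of naive column matching. This is illustrative only, not Flatfile’s actual code; the alias table and field names are invented:

// Map incoming spreadsheet headers onto canonical back-end fields.
// The aliases here are made up for illustration.
const aliases = {
  address: ["address", "mailing address", "street address"],
  email: ["email", "e-mail", "email address"],
};

function matchColumn(header) {
  const normalized = header.trim().toLowerCase();
  // Return the canonical field whose aliases include this header
  return Object.keys(aliases).find((field) =>
    aliases[field].includes(normalized)
  );
}

console.log(matchColumn("Mailing Address")); // "address"

A production importer would go further (fuzzy matching, letting the user confirm guesses), but the core idea is a translation layer between the customer’s labels and your data model.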

Auto-column matching CSV data is the bread and butter of Portal, saving massive amounts of time for customers while providing a delightful import experience.

Data validation

Data validation, which checks whether the data matches an expected format or value, is another critical feature to include in a custom data importer. Data validation is all about ensuring the data is accurate and specific to your required data model. For example, if special characters can’t be used within a certain template, error messages can appear during the import stage. A spreadsheet with hundreds of rows containing validation errors is a nightmare for customers, who will have to fix these issues themselves, or for your team, which will spend hours on end cleaning data. Automatic data validators streamline the healing of incoming data without the need for a manual review.
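As a rough sketch of what a row-level validation rule can look like (an invented example, not Flatfile’s API), consider checking email format and uniqueness before accepting a row:

// Invented example of row-level validation: check format and
// uniqueness before accepting a row into the system.
const seenEmails = new Set();

function validateRow(row) {
  const errors = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(row.email)) {
    errors.push("email: invalid format");
  }
  if (seenEmails.has(row.email)) {
    errors.push("email: duplicate value");
  } else {
    seenEmails.add(row.email);
  }
  return errors; // an empty array means the row passed
}

The hard part in practice is less the rules themselves than surfacing the errors in a way that lets users fix hundreds of rows quickly.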

We built Data Hooks into Portal to quickly normalize data on import. A perfect use-case would be validating email uniqueness against a database.

Data parsing

Data parsing is the process of taking an aggregation of information (in a spreadsheet) and breaking it into discrete parts. It’s the separation of data. In a custom-built data importer, a data parsing feature should not only have the ability to go from a file to an array of discrete data but also streamline the process for customers.
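Here is a naive illustration of the idea, with the usual caveat that real CSV parsers must also handle quoted fields, embedded commas, and escape sequences, which this sketch does not:

// Turn one line of CSV text into discrete fields (naive version).
function parseLine(line) {
  return line.split(",").map((field) => field.trim());
}

console.log(parseLine("Ada Lovelace, ada@example.com, London"));
// ["Ada Lovelace", "ada@example.com", "London"]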

Data transformation

Data transformation means making changes to imported data as it’s flowing into your system to meet an expected or desired value. Rather than sending data back to users with an error message for them to fix, data transformation can make small, systematic tweaks so that the users’ data is more usable in your backend. For example, when transferring a task list, prioritization data could be transformed into a different value, such as numbers instead of labels.
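Here is a small sketch of the task-list example from the paragraph above; the label-to-number mapping is invented, and the point is that the tweak happens systematically in the pipeline rather than being bounced back to the user:

// Transform priority labels into numbers as rows flow in.
// The mapping is invented for illustration.
const priorityMap = { high: 1, medium: 2, low: 3 };

function transformRow(row) {
  return {
    ...row,
    priority: priorityMap[row.priority.toLowerCase()] ?? row.priority,
  };
}

console.log(transformRow({ task: "Ship importer", priority: "High" }));
// { task: "Ship importer", priority: 1 }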

Data Hooks normalize imported customer data automatically using validation rules set in the Portal JSON config. These highly adaptable hooks can be put to work auto-validating nearly any incoming customer data.

We’ve baked all of the above features into Portal, our flagship CSV importer at Flatfile. Now that we’ve reviewed some of the must-have features of a data importer, the next obvious question for an engineer building an in-house importer is typically… should they?

Engineering teams that take on this task typically use custom or open source solutions, which may not fit their specific use cases. Building a comprehensive data importer also brings UX challenges: building a UI and maintaining application code to handle data parsing, normalization, and mapping. And that’s before considering how customer data requirements may change in future months, and the ramifications of maintaining a custom-built solution.

Companies facing data import challenges are now considering integrating a pre-built data importer such as Flatfile Portal. We’ve built Portal to be the elegant import button for web apps. With just a few lines of JavaScript, Portal can be implemented alongside any data model and validation ruleset, typically in a few hours. Engineers no longer need to dedicate hours to cleaning up and formatting data, nor do they need to custom-build a data importer (unless they want to!). With Flatfile, engineers can focus on creating product-differentiating features rather than working on solving spreadsheet imports.

Importing data is fraught with challenges, and there are several critical features necessary to include when building a data importer. The alternative to a custom-built solution is to look for a pre-built data importer such as Portal.

Flatfile’s mission is to remove barriers between humans and data. With AI-assisted data onboarding, they eliminate repetitive work and make B2B data transactions fast, intuitive, and error-free. Flatfile automatically learns how imported data should be structured and cleaned, enabling customers and teams to spend more time using their data instead of fixing it. Flatfile has transformed over 300 million rows of data for companies like ClickUp, Blackbaud, Benevity, and Toast. To learn more about Flatfile’s products, Portal and Concierge, visit flatfile.io.


Warp SVG Online

Css Tricks - Wed, 08/05/2020 - 1:15pm

The warping is certainly the cool part here. Some fancy math literally transforms the path data to do the warping. But the UX detail work here is just as nice. Scrolling the page zooms in and out via a transform: scale() on the SVG wrapper (clever!). Likewise, holding the spacebar lets you pan around which is as simple as transform: translate() on another wrapper (smart!). To warp your own SVG files, you just drag-and-drop them on the page (easy!).
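Those scroll and pan details are easy to approximate. Here is a minimal sketch of the scroll-to-zoom idea, my guess at the approach rather than the tool’s actual code; the #svg-wrapper element is hypothetical:

// Guess at scroll-to-zoom (not the tool's actual code): adjust a
// scale factor on wheel events and apply it to a wrapper element.
let scale = 1;
const wrapper = document.querySelector("#svg-wrapper"); // hypothetical

window.addEventListener(
  "wheel",
  (event) => {
    event.preventDefault();
    scale = Math.min(10, Math.max(0.1, scale - event.deltaY * 0.001));
    wrapper.style.transform = `scale(${scale})`;
  },
  { passive: false }
);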


Chapter 1: Birth

Css Tricks - Wed, 08/05/2020 - 7:09am

Sir Tim Berners-Lee is fascinated with information. It has been his life’s work. For over four decades, he has sought to understand how it is mapped and stored and transmitted. How it passes from person to person. How the seeds of information become the roots of dramatic change. It is so fundamental to the work that he has done that when he wrote the proposal for what would eventually become the World Wide Web, he called it “Information Management, a Proposal.”

Information is the web’s core function. A series of bytes stream across the world and at the end of it is knowledge. The mechanism for this transfer — what we know as the web — was created by the intersection of two things. The first is the Internet, the technology that makes it all possible. The second is hypertext, the concept that grounds its use. They were brought together by Sir Tim Berners-Lee. And when he was done he did something truly spectacular. He gave it away to everyone to use for free.

When Berners-Lee submitted “Information Management, a Proposal” to his superiors, they returned it with a comment on the top that read simply:

Vague, but exciting…

The web wasn’t a sure thing. Without the hindsight of today it looked far too simple to be effective. In other words, it was a hard sell. Berners-Lee was proficient at many things, but he was never a great salesman. He loved his idea for the web. But he had to convince everybody else to love it too.

Sir Tim Berners-Lee has a mind that races. He has been known — based on interviews and public appearances — to jump from one idea to the next. He is almost always several steps ahead of what he is saying, which is often quite profound. Until recently, he only gave a rare interview here and there, and masked his greatest achievements with humility and a wry British wit.

What is immediately apparent is that Sir Tim Berners-Lee is curious. Curious about everything. It has led him to explore some truly revolutionary ideas before they became truly revolutionary. But it also means that his focus is typically split. It makes it hard for him to hold on to things in his memory. “I’m certainly terrible at names and faces,” he once said in an interview. His original fascination with the elements for the web came from a very personal need to organize his own thoughts and connect them together, disparate and unconnected as they are. It is not at all unusual that when he reached for a metaphor for that organization, he came up with a web.

As a young boy, his curiosity was encouraged. His parents, Conway Berners-Lee and Mary Lee Woods, were mathematicians. They worked on the Ferranti Mark I, the world’s first commercially available computer, in the 1950s. They fondly speak of Berners-Lee as a child, taking things apart, experimenting with amateur engineering projects. There was nothing that he didn’t seek to understand further. Electronics — and computers specifically — were particularly enchanting.

Berners-Lee sometimes tells the story of a conversation he had with his father as a young boy about the limitations of computers in making associations between information that was not intrinsically linked. “The idea stayed with me that computers could be much more powerful,” Berners-Lee recalls, “if they could be programmed to link otherwise unconnected information. In an extreme view, the world can be seen as only connections.” He didn’t know it yet, but Berners-Lee had stumbled upon the idea of hypertext at a very early age. It would be several years before he would come back to it.

History is filled with attempts to organize knowledge. An oft-cited example is the Library of Alexandria, a fabled library of the ancient world thought to have held tens of thousands of meticulously organized texts.

At the turn of the century, Paul Otlet tried something similar in Belgium. His project was called the Répertoire Bibliographique Universel (Universal Bibliography). Otlet and a team of researchers created a library of over 15 million index cards, each with a discrete and small piece of information in topics ranging from science to geography. Otlet devised a sophisticated numbering system that allowed him to link one index card to another. He fielded requests from researchers around the world via mail or telegram, and Otlet’s researchers could follow a trail of linked index cards to find an answer. Once properly linked, information becomes infinitely more useful.

A sudden surge of scientific research in the wake of World War II prompted Vannevar Bush to propose another idea. In his groundbreaking 1945 essay in The Atlantic entitled “As We May Think,” Bush imagined a mechanical library called a Memex. Like Otlet’s Universal Bibliography, the Memex stored bits of information. But instead of index cards, everything was stored on compact microfilm. Through the process of what he called “associative indexing,” users of the Memex could follow trails of related information through an intricate web of links.

The list of attempts goes on. But it was Ted Nelson who finally gave the concept a name in 1968, two decades after Bush’s article in The Atlantic. He called it hypertext.

Hypertext is, essentially, linked text. Nelson observed that in the real world, we often give meaning to the connections between concepts; it helps us grasp their importance and remember them for later. The proximity of a Post-It to your computer, the orientation of ingredients in your refrigerator, the order of books on your bookshelf. Invisible though they may seem, each of these signifiers holds meaning, whether consciously or subconsciously, and they are only fully realized when taking a step back. Hypertext was a way to bring those same kinds of meaningful connections to the digital world.

Nelson’s primary contribution to hypertext is a number of influential theories and a decades-long project still in progress known as Xanadu. Much like the web, Xanadu uses the power of a network to create a global system of links and pages. However, Xanadu puts a far greater emphasis on the ability to trace text to its original author for monetization and attribution purposes. This feature, known as transclusion, has been a near impossible technological problem to solve.

Nelson’s interest in hypertext stems from the same issue with memory and recall as Berners-Lee’s. He refers to it as his hummingbird mind. Nelson finds it hard to hold on to associations he creates in the real world. Hypertext offers a way for him to map associations digitally, so that he can call on them later. Berners-Lee and Nelson met for the first time a couple of years after the web was invented. They exchanged ideas and philosophies, and Berners-Lee was able to thank Nelson for his influential thinking. At the end of the meeting, Berners-Lee asked if he could take a picture. Nelson, in turn, asked for a short video recording. Each was commemorating the moment they knew they would eventually forget. And each turned to technology for a solution.

By the mid-80s, on the wave of innovation in personal computing, there were several hypertext applications out in the wild. The hypertext community — a dedicated group of software engineers who believed in the promise of hypertext — created programs for researchers, academics, and even off-the-shelf personal computers. Every research lab worth its salt had a hypertext project. Together they built entirely new paradigms into their software, processes and concepts that feel wonderfully familiar today but were completely outside the realm of possibilities just a few years earlier.

At Brown University, the very place where Ted Nelson was studying when he coined the term hypertext, Norman Meyrowitz, Nancy Garrett, and Karen Catlin were the first to breathe life into the hyperlink, which was introduced in their program Intermedia. At Symbolics, Janet Walker was toying with the idea of saving links for later, a kind of speed dial for the digital world — something she was calling a bookmark. At the University of Maryland, Ben Shneiderman sought to compile and link the world’s largest source of information with his Interactive Encyclopedia System.

Dame Wendy Hall, at the University of Southampton, sought to extend the life of the link further in her own program, Microcosm. Each link made by the user was stored in a linkbase, a database apart from the main text specifically designed to store metadata about connections. In Microcosm, links could never die, never rot away. If their connection was severed, they could point elsewhere, since links weren’t directly tied to text. You could even write a bit of text alongside a link, expanding on why it was important, or attach separate layers of links to a document: one, for instance, a tailored set of carefully curated references for experts on a given topic; the other, a more laid-back set of links for the casual audience.

There were mailing lists and conferences and an entire community that was small, friendly, fiercely competitive and locked in an arms race to find the next big thing. It was impossible not to get swept up in the fervor. Hypertext enabled a new way to store actual, tangible knowledge; with every innovation the digital world became more intricate and expansive and all-encompassing.

Then came the heavy hitters. Under a shroud of mystery, researchers and programmers at the legendary Xerox PARC were building NoteCards. Apple caught wind of the idea and found it so compelling that they shipped their own hypertext application called Hypercard, bundled right into the Mac operating system. If you were an Apple user in the late ’80s, you likely have fond memories of Hypercard, an interface that allowed you to create a card and quickly link it to another. Cards could be anything: a recipe, maybe, or the prototype of your latest project. And, one by one, you could link those cards up, visually and with no friction, until you had a digital reflection of your ideas.

Towards the end of the 80s, it was clear that hypertext had a bright future. In just a few short years, the software had advanced in leaps and bounds.

After a brief stint studying physics at The Queen’s College, Oxford, Sir Tim Berners-Lee returned to his first love: computers. He eventually found a short-term, six-month contract at the particle physics lab Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), or simply, CERN.

CERN is responsible for a long line of particle physics breakthroughs. Most recently, they built the Large Hadron Collider, which led to the confirmation of the Higgs boson, a.k.a. the “God particle.”

CERN doesn’t operate like most research labs. Its internal staff makes up only a small percentage of the people that use the lab. Any research team from around the world can come and use the CERN facilities, provided that they are able to prove their research fits within the stated goals of the institution. A majority of CERN occupants are from these research teams. CERN is a dynamic, sprawling campus of researchers, ferrying from location to location on bicycles or mine-carts, working on the secrets of the universe. Each team is expected to bring their own equipment and expertise. That includes computers.

Berners-Lee was hired to assist with software on an earlier version of the particle accelerator called the Proton Synchrotron. When he arrived, he was blown away by the amount of pure, unfiltered information that flowed through CERN. It was nearly impossible to keep track of it all and equally impossible to find what you were looking for. Berners-Lee wanted to capture that information and organize it.

His mind flashed back to that conversation with his father all those years ago. What if it were possible to create a computer program that allowed you to make random associations between bits of information? What if you could, in other words, link one thing to another? He began working on a software project on the side for himself. Years later, that would be the same way he built the web. He called this project ENQUIRE, named for a Victorian handbook he had read as a child.

Using a simple prompt, ENQUIRE users could create a block of info, something like Otlet’s index cards all those years ago. And just like the Universal Bibliography, ENQUIRE allowed you to link one block to another. Tools were bundled in to make it easier to zoom back and see the connections between the links. For Berners-Lee this filled a simple need: it gave him a digital tool to stand in for the part of his memory that could never hold on to names and faces.

Compared to the software being actively developed at the University of Southampton or at Xerox or Apple, ENQUIRE was unsophisticated. It lacked a visual interface, and its format was rudimentary. A program like Hypercard supported rich-media and advanced two-way connections. But ENQUIRE was only Berners-Lee’s first experiment with hypertext. He would drop the project when his contract was up at CERN.

Berners-Lee would go and work for himself for several years before returning to CERN. By the time he came back, there would be something much more interesting for him to experiment with. Just around the corner was the Internet.

Packet switching is the single most important invention in the history of the Internet. It is how messages are transmitted over a globally decentralized network. It was devised independently in the 1960s by two different computer scientists, Donald Davies and Paul Baran. Both were interested in the way it made networks resilient.

Traditional telecommunications at the time were managed by what is known as circuit switching. With circuit switching, a direct connection is open between the sender and receiver, and the message is sent in its entirety between the two. That connection needs to be persistent and each channel can only carry a single message at a time. That line stays open for the duration of a message and everything is run through a centralized switch. 

If you’re searching for an example of circuit switching, you don’t have to look far. That’s how telephones work (or used to, at least). If you’ve ever seen an old film (or even a TV show like Mad Men) where an operator pulls a plug out of a wall and plugs it back in to connect a telephone call, that’s circuit switching (though that was all eventually automated). Circuit switching works because everything is sent over the wire all at once and through a centralized switch. That’s what the operators are connecting.

Packet switching works differently. Messages are divided into smaller bits, or packets, and sent over the wire a little at a time. They can be sent in any order because each packet has just enough information to know where in the order it belongs. Packets are sent through until the message is complete, and then re-assembled on the other side. There are a few advantages to a packet-switched network. Multiple messages can be sent at the same time over the same connection, split up into little packets. And crucially, the network doesn’t need centralization. Each node in the network can pass around packets to any other node without a central routing system. This made it ideal in a situation that requires extreme adaptability, like in the fallout of an atomic war, Paul Baran’s original reason for devising the concept.
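As a toy illustration of that core idea (a few lines of JavaScript, not a real network protocol): number the pieces, deliver them in any order, and reassemble by sequence number.

// Toy illustration of packet switching (not a real protocol): split a
// message into numbered packets, simulate out-of-order delivery,
// then reassemble by sequence number.
const message = "HELLO, WORLD";
const packets = [...message].map((chunk, seq) => ({ seq, chunk }));

// Packets may arrive in any order...
const arrived = [...packets].sort(() => Math.random() - 0.5);

// ...but each one knows where it belongs, so the receiver can reassemble.
const reassembled = arrived
  .sort((a, b) => a.seq - b.seq)
  .map((p) => p.chunk)
  .join("");

console.log(reassembled); // "HELLO, WORLD"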

When Davies began shopping around his idea for packet switching to the telecommunications industry, he was shown the door. “I went along to Siemens once and talked to them, and they actually used the words, they accused me of technical — they were really saying that I was being impertinent by suggesting anything like packet switching. I can’t remember the exact words, but it amounted to that, that I was challenging the whole of their authority.” Traditional telephone companies were not at all interested in packet switching. But ARPA was.

ARPA, later known as DARPA, was a research agency embedded in the United States Department of Defense. It was created in the throes of the Cold War — a reaction to the Soviet Union’s launch of the Sputnik satellite — but without a core focus. (It was created at the same time as NASA, so launching things into space was already taken.) To adapt to their situation, ARPA recruited research teams from colleges around the country. They acted as a coordinator and mediator between several active university research projects with a military focus.

ARPA’s organization had one surprising and crucial side effect. It was made up mostly of professors and graduate students who were working at its partner universities. The general attitude was that as long as you could prove some sort of modest relation to a military application, you could pitch your project for funding. As a result, ARPA was filled with lots of ambitious and free-thinking individuals working inside of a buttoned-up government agency, with little oversight, coming up with the craziest and most world-changing ideas they could. “We expected that a professional crew would show up eventually to take over the problems we were dealing with,” recalls Bob Kahn, an ARPA programmer critical to the invention of the Internet. The “professionals” never showed up.

One of those professors was Leonard Kleinrock at UCLA. He was involved in the first stages of ARPANET, the network that would eventually become the Internet. His job was to help implement the most controversial part of the project, the still theoretical concept known as packet switching, which enabled a decentralized and efficient design for the ARPANET network. It is likely that the Internet would not have taken shape without it. Once packet switching was implemented, everything came together quickly. By the early 1980s, it was simply called the Internet. By the end of the 1980s, the Internet went commercial and global, including a node at CERN.

The first applications of the Internet are still in use today. FTP, used for transferring files over the network, was one of the first things built. Email is another one. It had been around for a couple of decades on a closed system already. When the Internet began to spread, email became networked and infinitely more useful.

Other projects were aimed at making the Internet more accessible. They had names like Archie, Gopher, and WAIS, and have largely been forgotten. They were united by a common goal of bringing some order to the chaos of a decentralized system. WAIS and Archie did so by indexing the documents put on the Internet to make them searchable and findable by users. Gopher did so with a structured, hierarchical system. 

Kleinrock was there when the first message was ever sent over the Internet. He was supervising that part of the project, and even then, he knew what a revolutionary moment it was. However, he is quick to note that not everybody shared that feeling in the beginning. He recalls the sentiment held by the titans of the telecommunications industry like the Bell Telephone Company. “They said, ‘Little boy, go away,’ so we went away.” Most felt that the project would go nowhere, nothing more than a technological fad.

In other words, no one was paying much attention to what was going on and no one saw the Internet as much of a threat. So when that group of professors and graduate students tried to convince their higher-ups to let the whole thing be free — to let anyone implement the protocols of the Internet without a need for licenses or license fees — they didn’t get much pushback. The Internet slipped into public use and only the true technocratic dreamers of the late 20th century could have predicted what would happen next.

Berners-Lee returned to CERN in a fellowship position in 1984, four years after he had left. A lot had changed. CERN had developed their own network, known as CERNET, but by 1989 they had hooked up to the new, internationally standard Internet. “In 1989, I thought,” he recalls, “look, it would be so much easier if everybody asking me questions all the time could just read my database, and it would be so much nicer if I could find out what these guys are doing by just jumping into a similar database of information for them.” Put another way, he wanted to share his own homepage, and get a link to everyone else’s.

What he needed was a way for researchers to share these “databases” without having to think much about how it all works. His way in with management was operating systems. CERN’s research teams all bring their own equipment, including computers, and there’s no way to guarantee they’re all running the same OS. Interoperability between operating systems is a difficult problem by design; generally speaking, the goal of an OS is to lock you in. Among its many other uses, a globally networked hypertext system like the web was a wonderful way for researchers to share notes between computers running different operating systems.

However, Berners-Lee had a bit of trouble explaining his idea. He’s never exactly been concise. By 1989, when he wrote “Information Management, a Proposal,” Berners-Lee already had worldwide ambitions. The document is thousands of words, filled with diagrams and charts. It jumps energetically from one idea to the next without fully explaining what’s just been said. Much of what would eventually become the web was included in the document, but it was just too big of an idea. It was met with a lukewarm response — that “Vague, but exciting” comment scrawled across the top.

A year later, in May of 1990, at the encouragement of his boss Mike Sendall (the author of that comment), Berners-Lee circulated the proposal again. This time it was enough to buy him a bit of time internally to work on it. He got lucky. Sendall understood his ambition and aptitude. He wouldn’t always get that kind of chance. The web needed to be marketed internally as an invaluable tool. CERN needed to need it. Taking complex ideas and boiling them down to their most salient, marketable points, however, was not Berners-Lee’s strength. For that, he was going to need a partner. He found one in Robert Cailliau.

Cailliau was a CERN veteran. By 1989, he’d worked there as a programmer for over 15 years. He’d embedded himself in the company culture, proving a useful resource in helping teams organize their informational toolset and knowledge-sharing systems. He had helped several teams at CERN do exactly the kind of thing Berners-Lee was proposing, though at a smaller scale.

Temperamentally, Cailliau was about as different from Berners-Lee as you could get. He was hyper-organized and fastidious. He knew how to sell things internally, and he had made plenty of political inroads at CERN. What he shared with Berners-Lee was an almost insatiable curiosity. During his time as a nurse in the Belgian military, he got fidgety. “When there was slack at work, rather than sit in the infirmary twiddling my thumbs, I went and got myself some time on the computer there.” He ended up as a programmer in the military, working on war games and computerized models. He couldn’t help but look for the next big thing.

In the late 80s, Cailliau had a strong interest in hypertext. He was taking a look at Apple’s Hypercard as a potential internal documentation system at CERN when he caught wind of Berners-Lee’s proposal. He immediately recognized its potential.

Working alongside Berners-Lee, Cailliau pieced together a new proposal. Something more concise, more understandable, and more marketable. While Berners-Lee began putting together the technologies that would ultimately become the web, Cailliau began trying to sell the idea to interested parties inside of CERN.

The web, in all of its modern uses and ubiquity, can be difficult to define as just one thing — we have the web on our refrigerators now. In the beginning, however, the web was made up of only a few essential features.

There was the web server, a computer wired to the Internet that can transmit documents and media (webpages) to other computers. Webpages are served via HTTP, a protocol designed by Berners-Lee in the earliest iterations of the web. HTTP is a layer on top of the Internet, and was designed to make things as simple, and resilient, as possible. HTTP is so simple that it forgets a request as soon as it has been fulfilled. It has no memory of the webpages it’s served in the past. The only thing HTTP is concerned with is the request it’s currently handling. That makes it magnificently easy to use.
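For a sense of that simplicity, here is roughly what a minimal early-style HTTP exchange looks like (an illustrative transcript, not one taken from the article): the client sends a single request, the server sends a response, and nothing is remembered for next time.

GET /page.html HTTP/1.0

HTTP/1.0 200 OK
Content-Type: text/html

<html>…</html>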

These webpages are sent to browsers, the software that you’re using to read this article. Browsers can read documents handed to them by a server because they understand HTML, another early invention of Sir Tim Berners-Lee. HTML is a markup language; it allows programmers to give meaning to their documents so that they can be understood. The “H” in HTML stands for hypertext. Like HTTP, HTML — all of the building blocks programmers can use to structure a document — wasn’t all that complex, especially when compared to other hypertext applications at the time. HTML comes from a long line of other, similar markup languages, but Berners-Lee expanded it to include the link, in the form of an anchor tag. The <a> tag is the most important piece of HTML because it serves the web’s greatest function: to link together information.

The hyperlink was made possible by the Universal Resource Identifier (URI), later renamed the Uniform Resource Identifier after the IETF found the word “universal” a bit too substantial. But for Berners-Lee, that was exactly the point. “Its universality is essential: the fact that a hypertext link can point to anything, be it personal, local or global, be it draft or highly polished,” he wrote in his personal history of the web. Of all the original technologies that made up the web, Berners-Lee — and several others — have noted that the URL was the most important.

By Christmas of 1990, Sir Tim Berners-Lee had all of that built. A full prototype of the web was ready to go.

Cailliau, meanwhile, had had a bit of success trying to sell the idea to his bosses. He had hoped that his revised proposal would give him a team and some time. Instead he got six months and a single staff member, intern Nicola Pellow. Pellow was new to CERN, on placement for her mathematics degree. But her work on the Line Mode Browser, which enabled people from around the world using any operating system to browse the web, proved a crucial element in the web’s early success. Berners-Lee’s work, combined with the Line Mode Browser, became the web’s first set of tools. It was ready to show to the world.

When the team at CERN submitted a paper on the World Wide Web to the San Antonio Hypertext Conference in 1991, it was soundly rejected. They went anyway, and set up a table with a computer to demo it to conference attendees. One attendee remarked:

They have chutzpah calling that the World Wide Web!

The truth is, the web was not at all sophisticated. Its use of hypertext was elementary, allowing only for simplistic, text-based links. And without two-way links, pretty much a given in hypertext applications, links could go dead at any minute. There was no linkbase, nor sophisticated metadata assigned to links. There was just the anchor tag. The protocols that ran on top of the Internet were similarly basic. HTTP only allowed for a handful of actions, and alternatives like Gopher or WAIS offered far more options for advanced connections through the Internet network.

It was hard to explain, difficult to demo, and had overly lofty ambitions. It was created by a man who didn’t have much interest in marketing his ideas. Even the name was somewhat absurd: “WWW” is one of the only abbreviations that takes longer to say than the phrase it stands for.

We know how this story ends. The web won. It’s used by billions of people and runs through everything we do. It is among the most remarkable technological achievements of the 20th century.

It had a few advantages, of course. It was instantly global and widely accessible thanks to the Internet. And the URL — and its uniqueness — is one of the more clever concepts to come from networked computing.

But if you want to truly understand why the web succeeded, we have to come back to information. One of Berners-Lee’s most deeply held beliefs is that information is incredibly powerful and deserves to be free. He believed that the web could deliver on that promise. For it to do that, the web would need to spread.

Berners-Lee looked to the web’s predecessor for inspiration: the Internet. The Internet succeeded, in part, because its creators gave it away to everyone. After considering several licensing options, he lobbied CERN to release the web unlicensed to the general public. CERN, an organization far more interested in particle physics breakthroughs than hypertext, agreed. In 1993, the web officially entered the public domain.

And that was the turning point. They didn’t know it then, but that was the moment the web succeeded: the moment Berners-Lee was able to make globally available information truly free.

In an interview some years ago, Berners-Lee recalled how it was that the web came to be.

I had the idea for it. I defined how it would work. But it was actually created by people.

That may sound like humility from one of the world’s great thinkers — and it is that a little — but it is also the truth. The web was Berners-Lee’s gift to the world. He gave it to us, and we made it what it was. He and his team fought hard at CERN to make that happen.

Berners-Lee knew that with the resources available to him he would never be able to spread the web sufficiently outside of the hallways of CERN. Instead, he packaged up all the code that was needed to build a browser into a library called libwww and posted it to a Usenet group. That was enough for some people to get interested in browsers. But before browsers would be useful, you needed something to browse.


The Cicada Principle, revisited with CSS variables

Css Tricks - Tue, 08/04/2020 - 1:13pm

Lea Verou digging up the CSS trickery classic and applying it to clip the backgrounds of some code blocks:

The main idea is simple: You write your main rule using CSS variables, and then use :nth-of-*() rules to set these variables to something different every N items. If you use enough variables, and choose your Ns for them to be prime numbers, you reach a good appearance of pseudo-randomness with relatively small Ns.

The update to the trick is that she doesn’t update the whole block of code for each of the selectors; just one part of the clipping path gets updated via a custom property.

Lea’s post has more trickery using the same concepts.


Jetpack CRM

Css Tricks - Tue, 08/04/2020 - 4:39am

About a year ago, Automattic bought up Zero BS CRM. The thinking at the time was that it could be rebranded into the Jetpack suite and, well, that happened.

CRM means “Customer Relationship Management,” in case you’re like me and this is a little outside your sphere of everyday software. CRMs are big business though. When I lived in Silicon Valley, I would regularly drive by a very large building bearing the name of a CRM company that you’ve definitely heard of.

The name is a good one since CRMs are for helping you manage your customers. I have a brother-in-law who sells office furniture. He definitely uses a CRM. He gets leads for new clients, he helps figure out what they need, gets them quotes, and deals with billing them. It sounds straightforward, but it kinda isn’t. CRMs are complicated. Who better to bring you a good CRM than someone who brings you a good CMS?

The fact that people are different and businesses are different means that, rather than forcing your business to behave like a CRM thinks it should, the CRM should flex to what you need. Jetpack CRM calls it a DIY CRM:

We let entrepreneurs pick and choose only what they need.

What appeals to me about Jetpack CRM is that you can leverage software that you already have and use. I swear the older I get, the more cautious I become about adding major new software products to my life. There are always learning curves that are steeper than you want, gotchas you didn’t foresee, and maintenance you end up loathing. The idea of adding major new functionality under the WordPress roof feels much better to me. Every time I do that with WordPress, I end up thinking it was the right move. I’ve done it now with forums, newsletters, and eCommerce, and each time was great.

I think it’s a smart move for Automattic. People should be able to do just about anything with their website, especially when it has the power of a login system, permission system, database, and all this infrastructure ready to go like any given WordPress site has. A CRM is a necessary thing for a lot of businesses and now Automattic has an answer to that.

Speaking of all the modularity of turning on and off features, you can also select a level that, essentially, specifies how much you want Jetpack CRM to take over your WordPress admin.

If running a CRM is the primary thing you’re doing with your WordPress site, Jetpack CRM can kinda take over and make the admin primarily that with the “CRM Only” setting. Or, just augment the admin and leave everything else as-is with the “Full” setting.

Like most things Jetpack these days, this is an à la carte purchase if you need features beyond what the free plan offers. It’s not bundled with any other Jetpack stuff.

It’s also like WooCommerce in that there are a bunch of extensions for it that you only have to buy if you need them. For example, the $49 Gravity Forms extension allows you to build your lead forms with the Gravity Forms plugin, or it comes bundled in any of the paid plans. The $39 MailChimp plugin connects new leads to MailChimp lists.

Mike Stott, quoted over on WordPress Tavern:

“There’s a massive opportunity for CRM in the WordPress space,” said Stott. “A CRM is not like installing an SEO plugin on every website you own — generally you’d only have a single CRM for your business — but it’s the core of your business. The fact that 3,000+ users and counting are choosing WordPress to run their CRM is a great start.”

I’m a pretty heavy Jetpack user myself, using almost all its features.

Just to be clear, this isn’t some new feature of the core Jetpack plugin that’ll just show up on your site when you upgrade versions. It’s a totally separate product. In fact, the plugin folder you install to is called “zero-bs-crm,” the original name of the product.

I was able to install and play around with it right here on CSS-Tricks with no trouble at all. Being able to create invoices right here on the site is pretty appealing to me, so I’ll be playing with that some more.


Friction Logs

Css Tricks - Mon, 08/03/2020 - 1:27pm

I first heard the term “Friction Log” from Suz Hinton back in April on ShopTalk. The idea makes an extreme amount of sense: Use a thing, and write down moments where you felt friction.

Did some installation step bug out? Did you see something that the docs didn’t mention? Did you have to stop and wonder if some particular API existed or how it worked? Were you confused about why a certain thing was called what it was? Friction! Log it. Then use that log to smooth out that friction because whatever is being built will be better for it.

I mention this because my friend Rick Blalock just started frictionlog.com based on that idea. You might think it’d be a little weird to watch content of someone else getting stuck, but I know better. People have told me time and time again that they enjoy watching me get stuck and unstuck in screencasts. This is like that, except these moments of getting stuck are moments of friction worth calling out.

I’d love to see a moment in the industry where you pay people to do friction logs on your product, like a form of extra-educated customer research.


The Making of: Netlify’s Million Devs SVG Animation Site

Css Tricks - Mon, 08/03/2020 - 5:04am

The following article captures the process of building the Million Developers microsite for Netlify. This project was built by a few folks, and we’ve captured some parts of the process here, focusing mainly on the animation aspects, in case any of it is helpful to others building similar experiences.

Building a Vue App out of an SVG

The beauty of SVG is you can think of it, and the coordinate system, as a big game of battleship. You’re really thinking in terms of x, y, width, and height.

<div id="app">
  <app-login-result-sticky v-if="user.number" />
  <app-github-corner />
  <app-header />
  <!-- this is one big SVG -->
  <svg
    id="timeline"
    xmlns="http://www.w3.org/2000/svg"
    :viewBox="timelineAttributes.viewBox"
  >
    <!-- this is the desktop path -->
    <path
      class="cls-1 timeline-path"
      transform="translate(16.1 -440.3)"
      d="M951.5,7107..."
    />
    <!-- this is the path for mobile -->
    <app-mobilepath v-if="viewportSize === 'small'" />
    <!-- all of the stations, broken down by year -->
    <app2016 />
    <app2017 />
    <app2018 />
    <app2019 />
    <app2020 />
    <!-- the 'you are here' marker, only shown on desktop and if you're logged in -->
    <app-youarehere v-if="user.number && viewportSize === 'large'" />
  </svg>
</div>

Within the larger app component, we have the large header, but as you can see, the rest is one giant SVG. From there, we broke down the rest of the giant SVG into several components:

  • Candyland-type paths for both desktop and mobile, shown conditionally by a state in the Vuex store
  • There are 27 stations, not including their text counterparts, and many decorative components like bushes, trees, and streetlamps, which is a lot to keep track of in one component, so they’re broken down by year
  • The ‘you are here’ marker, only shown on desktop and if you’re logged in

SVG is wonderfully flexible because not only can we draw absolute and relative shapes and paths within that coordinate system, we can also draw SVGs within SVGs. We just need to define the x, y, width, and height of those SVGs and we can mount them inside the larger SVG, which is exactly what we’re going to do with all these components so that we can adjust their placement whenever needed. The <g> within the components stands for group; you can think of them a little like divs in HTML.

So here’s what this looks like within the year components:

<template>
  <g>
    <!-- decorative components -->
    <app-tree x="650" y="5500" />
    <app-tree x="700" y="5550" />
    <app-bush x="750" y="5600" />

    <!-- station component -->
    <app-virtual x="1200" y="6000" xSmall="50" ySmall="15100" />

    <!-- text component, with slots -->
    <app-text
      x="1400"
      y="6500"
      xSmall="50"
      ySmall="15600"
      num="20"
      url-slug="jamstack-conf-virtual"
    >
      <template v-slot:date>May 27, 2020</template>
      <template v-slot:event>Jamstack Conf Virtual</template>
    </app-text>
    ...
</template>

<script>
...
export default {
  components: {
    // loading the decorative components synchronously
    AppText,
    AppTree,
    AppBush,
    AppStreetlamp2,
    // loading the heavy station components asynchronously
    AppBuildPlugins: () => import("@/components/AppBuildPlugins.vue"),
    AppMillion: () => import("@/components/AppMillion.vue"),
    AppVirtual: () => import("@/components/AppVirtual.vue"),
  },
};
...
</script>

Within these components, you can see a number of patterns:

  • We have bushes and trees for decoration that we can sprinkle around via x and y values passed as props
  • We can have individual station components, which also have two different positioning values, one for large and small devices
  • We have a text component, which has three available slots, one for the date, and two for two different text lines
  • We’re also loading in the decorative components synchronously, and loading those heavier SVG stations async
SVG Animation

Header animation for Million Devs

The SVG animation is done with GreenSock (GSAP), with their new ScrollTrigger plugin. I wrote up a guide on how to work with GSAP for their latest 3.0 release earlier this year. If you’re unfamiliar with this library, that might be a good place to start.

Working with the plugin is thankfully straightforward; here is the base of the functionality we’ll need:

import { gsap } from "gsap";
import { ScrollTrigger } from "gsap/ScrollTrigger.js";
import { mapState } from "vuex";

gsap.registerPlugin(ScrollTrigger);

export default {
  computed: {
    ...mapState([
      "toggleConfig",
      "startConfig",
      "isAnimationDisabled",
      "viewportSize",
    ]),
  },
  ...
  methods: {
    millionAnim() {
      let vm = this;
      let tl;
      const isScrollElConfig = {
        scrollTrigger: {
          trigger: `.million${vm.num}`,
          toggleActions: this.toggleConfig,
          start: this.startConfig,
        },
        defaults: {
          duration: 1.5,
          ease: "sine",
        },
      };
    },
  },
  mounted() {
    this.millionAnim();
  },
};

First, we’re importing gsap and the plugin we need, as well as state from the Vuex store. I put the toggleActions and start config settings in the store and passed them into each component because, while I was working, I needed to experiment with which point in the UI should trigger the animations; this kept me from having to configure each component separately.

Those configurations in the store look like this:

export default new Vuex.Store({
  state: {
    toggleConfig: `play pause none pause`,
    startConfig: `center 90%`,
  },
});

This configuration breaks down to:

  • toggleConfig: play the animation when it passes down the page (another option is restart, which will retrigger it if you see it again), pause it when it’s out of the viewport (this can slightly help with perf), and don’t retrigger in reverse when going back up the page.
  • startConfig states that when the center of the element is 90% of the way down the viewport, the animation is triggered to begin.

These are the settings we decided on for this project; there are many others! You can understand all of the options with this video.

For this particular animation, we needed to treat it a little differently depending on whether it was the banner animation, which didn’t need to be triggered on scroll, or whether it appeared later in the timeline. We passed in a prop and used it to apply the right config based on the number:

if (vm.num === 1) {
  tl = gsap.timeline({
    defaults: {
      duration: 1.5,
      ease: "sine",
    },
  });
} else {
  tl = gsap.timeline(isScrollElConfig);
}

Then, for the animation itself, I’m using what’s called a label on the timeline; you can think of it as identifying a point in time on the playhead that you may want to hang animations or functionality off of. We have to make sure we use the number prop in the label too, so we keep the timelines for the header and footer components separate.

tl.add(`million${vm.num}`)
  ...
  .from(
    "#front-leg-r",
    {
      duration: 0.5,
      rotation: 10,
      transformOrigin: "50% 0%",
      repeat: 6,
      yoyo: true,
      ease: "sine.inOut",
    },
    `million${vm.num}`
  )
  .from(
    "#front-leg-l",
    {
      duration: 0.5,
      rotation: 10,
      transformOrigin: "50% 0%",
      repeat: 6,
      yoyo: true,
      ease: "sine.inOut",
    },
    `million${vm.num}+=0.25`
  );

There’s a lot going on in the million devs animation, so I’ll isolate one piece of movement to break down: above, we have the girl’s swinging legs. Both legs swing separately and repeat several times, and that yoyo: true lets GSAP know that I’d like the animation to reverse every other repetition. We’re rotating the legs, but what makes it realistic is that the transformOrigin starts at the center top of the leg, so that when it rotates, it rotates around the knee axis, like knees do :)

Adding an Animation Toggle

We wanted to give users the ability to explore the site without animation, should they have a vestibular disorder, so we created a toggle for the animation play state. The toggle is nothing special: it updates state in the Vuex store through a mutation, as you might expect:

export default new Vuex.Store({
  state: {
    ...
    isAnimationDisabled: false,
  },
  mutations: {
    updateAnimationState(state) {
      state.isAnimationDisabled = !state.isAnimationDisabled;
    },
    ...
  },
});

The real updates happen in the topmost App component, where we collect all of the animations and triggers and then adjust them based on the state in the store. We watch the isAnimationDisabled property for changes, and when one occurs, we grab all instances of ScrollTrigger animations in the app. We don’t .kill() the animations, which is one option, because if we did, we wouldn’t be able to restart them.

Instead, we either set their progress to the final frame if animations are disabled or, if we’re restarting them, we set their progress to 0 so they can replay when they are set to fire on the page. If we had used .restart() here, all of the animations would have played immediately and we wouldn’t see them trigger as we kept going down the page. Best of both worlds!

watch: {
  isAnimationDisabled(newVal, oldVal) {
    ScrollTrigger.getAll().forEach((trigger) => {
      let animation = trigger.animation;

      if (newVal === true) {
        animation && animation.progress(1);
      } else {
        animation && animation.progress(0);
      }
    });
  },
},

SVG Accessibility

I am by no means an accessibility expert, so please let me know if I’ve misstepped here. But I did a fair amount of research and testing on this site, and I was pretty excited that, when I tested on my MacBook via VoiceOver, the site’s pertinent information was traversable, so I’m sharing what we did to get there.

For the initial SVG that encased everything, we didn’t apply a role, so that the screen reader would traverse within it. For the trees and bushes, we applied role="img" so the screen reader would skip them. For any of the more detailed stations, we applied a unique id and a title, which was the first element within the SVG. We also applied role="presentation".

<svg
  ...
  role="presentation"
  aria-labelledby="analyticsuklaunch"
>
  <title id="analyticsuklaunch">Launch of analytics</title>

I learned a lot of this from this article by Heather Migliorisi, and this great article by Leonie Watson.

The text within the SVG announces itself as you tab through the page, the link is found, and all of the text is read. This is what that text component looks like, with those slots mentioned above.

<template>
  <a
    :href="`https://www.netlify.com/blog/2020/08/03/netlify-milestones-on-the-road-to-1-million-devs/#${urlSlug}`"
  >
    <svg
      xmlns="http://www.w3.org/2000/svg"
      width="450"
      height="250"
      :x="svgCoords.x"
      :y="svgCoords.y"
      viewBox="0 0 280 115.4"
    >
      <g :class="`textnode text${num}`">
        <text class="d" transform="translate(7.6 14)">
          <slot name="date">Jul 13, 2016</slot>
        </text>
        <text class="e" transform="translate(16.5 48.7)">
          <slot name="event">Something here</slot>
        </text>
        <text class="e" transform="translate(16.5 70)">
          <slot name="event2" />
        </text>
        <text class="h" transform="translate(164.5 104.3)">View Milestone</text>
      </g>
    </svg>
  </a>
</template>

Here’s a video of what this sounds like if I tab through the SVG on my Mac:

If you have further suggestions for improvement please let us know!

The repo is also open source if you want to check out the code or file a PR.

Thanks a million (pun intended) to my coworkers Zach Leatherman and Hugues Tennier, who worked on this with me; their input and work were invaluable to the project, and it only exists because of the teamwork that got it over the line! And so much respect to Alejandro Alvarez, who did the design and did a spectacular job. High fives all around. 🙌

The post The Making of: Netlify’s Million Devs SVG Animation Site appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A Lightweight Masonry Solution

Css Tricks - Mon, 08/03/2020 - 4:00am

Back in May, I learned about Firefox adding masonry to CSS grid. Masonry layouts are something I’ve been wanting to do on my own from scratch for a very long time, but have never known where to start. So, naturally, I checked the demo and then I had a lightbulb moment when I understood how this new proposed CSS feature works.

Support is obviously limited to Firefox for now (and, even there, only behind a flag), but it still offered me enough of a starting point for a JavaScript implementation that would cover browsers that currently lack support.

The way Firefox implements masonry in CSS is by setting either grid-template-rows (as in the example) or grid-template-columns to a value of masonry.

My approach was to use this for supporting browsers (which, again, means just Firefox for now) and create a JavaScript fallback for the rest. Let’s look at how this works using the particular case of an image grid.

First, enable the flag

In order to do this, we go to about:config in Firefox and search for “masonry.” This brings up the layout.css.grid-template-masonry-value.enabled flag, which we enable by double-clicking its value to toggle it from false (the default) to true.

Making sure we can test this feature.

Let’s start with some markup

The HTML structure looks something like this:

<section class="grid--masonry"> <img src="black_cat.jpg" alt="black cat" /> <!-- more such images following --> </section> Now, let’s apply some styles

The first thing we do is make the top-level element a CSS grid container. Next, we define a maximum width for our images, let’s say 10em. We also want these images to shrink to whatever space is available for the grid’s content-box if the viewport becomes too narrow to accommodate a single 10em column, so the value we actually set is Min(10em, 100%). Since responsiveness is important these days, we don’t bother with a fixed number of columns, but instead auto-fit as many columns of this width as we can:

$w: Min(10em, 100%);

.grid--masonry {
  display: grid;
  grid-template-columns: repeat(auto-fit, $w);

  > * { width: $w; }
}

Note that we’ve used Min() and not min() in order to avoid a Sass conflict.

(CodePen demo)

Well, that’s a grid!

Not a very pretty one though, so let’s force its content to be in the middle horizontally, then add a grid-gap and padding that are both equal to a spacing value ($s). We also set a background to make it easier on the eyes.

$s: .5em;

/* masonry grid styles */
.grid--masonry {
  /* same styles as before */
  justify-content: center;
  grid-gap: $s;
  padding: $s
}

/* prettifying styles */
html { background: #555 }

(CodePen demo)

Having prettified the grid a bit, we turn to doing the same for the grid items, which are the images. Let’s apply a filter so they all look a bit more uniform, while giving a little additional flair with slightly rounded corners and a box-shadow.

img {
  border-radius: 4px;
  box-shadow: 2px 2px 5px rgba(#000, .7);
  filter: sepia(1);
}

(CodePen demo)

The only thing we need to do now for browsers that support masonry is to declare it:

.grid--masonry {
  /* same styles as before */
  grid-template-rows: masonry;
}

While this won’t work in most browsers, it produces the desired result in Firefox with the flag enabled as explained earlier.

grid-template-rows: masonry working in Firefox with the flag enabled (Demo).

But what about the other browsers? That’s where we need a…

JavaScript fallback

In order to be economical with the JavaScript the browser has to run, we first check if there are any .grid--masonry elements on that page and whether the browser has understood and applied the masonry value for grid-template-rows. Note that this is a generic approach that assumes we may have multiple such grids on a page.

let grids = [...document.querySelectorAll('.grid--masonry')];

if(grids.length && getComputedStyle(grids[0]).gridTemplateRows !== 'masonry') {
  console.log('boo, masonry not supported 😭')
}
else console.log('yay, do nothing!')

Support test (live).

If the new masonry feature is not supported, we then get the row-gap and the grid items for every masonry grid, then set a number of columns (which is initially 0 for each grid).

let grids = [...document.querySelectorAll('.grid--masonry')];

if(grids.length && getComputedStyle(grids[0]).gridTemplateRows !== 'masonry') {
  grids = grids.map(grid => ({
    _el: grid,
    gap: parseFloat(getComputedStyle(grid).gridRowGap),
    items: [...grid.childNodes].filter(c => c.nodeType === 1),
    ncol: 0
  }));

  grids.forEach(grid => console.log(`grid items: ${grid.items.length}; grid gap: ${grid.gap}px`))
}

Note that we need to make sure the child nodes are element nodes (which means they have a nodeType of 1). Otherwise, we can end up with text nodes consisting of carriage returns in the array of items.

Checking we got the correct number of items and gap (live).

Before proceeding further, we have to ensure the page has loaded and the elements aren’t still moving around. Once we’ve handled that, we take each grid and read its current number of columns. If this is different from the value we already have, then we update the old value and rearrange the grid items.

if(grids.length && getComputedStyle(grids[0]).gridTemplateRows !== 'masonry') {
  grids = grids.map(/* same as before */);

  function layout() {
    grids.forEach(grid => {
      /* get the post-resize/load number of columns */
      let ncol = getComputedStyle(grid._el).gridTemplateColumns.split(' ').length;

      if(grid.ncol !== ncol) {
        grid.ncol = ncol;
        console.log('rearrange grid items')
      }
    });
  }

  addEventListener('load', e => {
    layout(); /* initial load */
    addEventListener('resize', layout, false)
  }, false);
}

Note that calling the layout() function is something we need to do both on the initial load and on resize.

When we need to rearrange grid items (live).

To rearrange the grid items, the first step is to remove the top margin on all of them (this may have been set to a non-zero value to achieve the masonry effect before the current resize).

If the viewport is narrow enough that we only have one column, we’re done!

Otherwise, we skip the first ncol items and we loop through the rest. For each item considered, we compute the position of the bottom edge of the item above and the current position of its top edge. This allows us to compute how much we need to move it vertically such that its top edge is one grid gap below the bottom edge of the item above.

/* if the number of columns has changed */
if(grid.ncol !== ncol) {
  /* update number of columns */
  grid.ncol = ncol;

  /* revert to initial positioning, no margin */
  grid.items.forEach(c => c.style.removeProperty('margin-top'));

  /* if we have more than one column */
  if(grid.ncol > 1) {
    grid.items.slice(ncol).forEach((c, i) => {
      let prev_fin = grid.items[i].getBoundingClientRect().bottom /* bottom edge of item above */,
          curr_ini = c.getBoundingClientRect().top /* top edge of current item */;

      c.style.marginTop = `${prev_fin + grid.gap - curr_ini}px`
    })
  }
}

We now have a working, cross-browser solution!

(CodePen demo)

A couple of minor improvements

A more realistic structure

In a real world scenario, we’re more likely to have each image wrapped in a link to its full size so that the big image opens in a lightbox (or we navigate to it as a fallback).

<section class='grid--masonry'>
  <a href='black_cat_large.jpg'>
    <img src='black_cat_small.jpg' alt='black cat'/>
  </a>
  <!-- and so on, more thumbnails following the first -->
</section>

This means we also need to alter the CSS a bit. While we don’t need to explicitly set a width on the grid items anymore — as they’re now links — we do need to set align-self: start on them because, unlike images, they stretch to cover the entire row height by default, which will throw off our algorithm.

.grid--masonry > * { align-self: start; }

img {
  display: block; /* avoid weird extra space at the bottom */
  width: 100%;
  /* same styles as before */
}

(CodePen demo)

Making the first element stretch across the grid

We can also make the first item stretch horizontally across the entire grid (which means we should probably also limit its height and make sure the image doesn’t overflow or get distorted):

.grid--masonry > :first-child {
  grid-column: 1 / -1;
  max-height: 29vh;
}

img {
  max-height: inherit;
  object-fit: cover;
  /* same styles as before */
}

We also need to exclude this stretched item by adding another filter criterion when we get the list of grid items:

grids = grids.map(grid => ({
  _el: grid,
  gap: parseFloat(getComputedStyle(grid).gridRowGap),
  items: [...grid.childNodes].filter(c =>
    c.nodeType === 1 &&
    +getComputedStyle(c).gridColumnEnd !== -1
  ),
  ncol: 0
}));

(CodePen demo)

Handling grid items with variable aspect ratios

Let’s say we want to use this solution for something like a blog. We keep the exact same JS and almost the exact same masonry-specific CSS – we only change the maximum width a column may have and drop the max-height restriction for the first item.

As can be seen from the demo below, our solution also works perfectly in this case, where we have a grid of blog posts:

(CodePen demo)

You can also resize the viewport to see how it behaves in this case.

However, if we want the width of the columns to be somewhat flexible, for example, something like this:

$w: minmax(Min(20em, 100%), 1fr)

Then we have a problem on resize:

The changing width of the grid items combined with the fact that the text content is different for each means that when a certain threshold is crossed, we may get a different number of text lines for a grid item (thus changing the height), but not for the others. And if the number of columns doesn’t change, then the vertical offsets don’t get recomputed and we end up with either overlaps or bigger gaps.

In order to fix this, we need to recompute the offsets whenever at least one item’s height changes for the current grid. That means we also need to test whether more than zero items of the current grid have changed their height. Then we need to reset this value at the end of the if block so that we don’t rearrange the items needlessly next time around.

if(grid.ncol !== ncol || grid.mod) {
  /* same as before */
  grid.mod = 0
}

Alright, but how do we change this grid.mod value? My first idea was to use a ResizeObserver:

if(grids.length && getComputedStyle(grids[0]).gridTemplateRows !== 'masonry') {
  let o = new ResizeObserver(entries => {
    entries.forEach(entry => {
      grids.find(grid => grid._el === entry.target.parentElement).mod = 1
    });
  });

  /* same as before */

  addEventListener('load', e => {
    /* same as before */
    grids.forEach(grid => {
      grid.items.forEach(c => o.observe(c))
    })
  }, false)
}

This does the job of rearranging the grid items when necessary even if the number of grid columns doesn’t change. But it also makes even having that if condition pointless!

This is because it changes grid.mod to 1 whenever the height or the width of at least one item changes. The height of an item changes due to the text reflow, caused by the width changing. But the change in width happens every time we resize the viewport and doesn’t necessarily trigger a change in height.

This is why I eventually decided on storing the previous item heights and checking whether they have changed on resize to determine whether grid.mod remains 0 or not:

function layout() {
  grids.forEach(grid => {
    grid.items.forEach(c => {
      let new_h = c.getBoundingClientRect().height;

      if(new_h !== +c.dataset.h) {
        c.dataset.h = new_h;
        grid.mod++
      }
    });

    /* same as before */
  })
}

(CodePen demo)

That’s it! We now have a nice lightweight solution. The minified JavaScript is under 800 bytes, while the strictly masonry-related styles are under 300 bytes.

But, but, but… What about browser support?

Well, @supports just so happens to have better browser support than any of the newer CSS features used here, so we can put the nice stuff inside it and have a basic, non-masonry grid for non-supporting browsers. This version works all the way back to IE9.
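The exact @supports block isn’t shown here, but the idea looks something like this minimal sketch (which assumes grid support is the thing being tested; the real cut may differ):

/* basic fallback styles for everyone go outside the block */

@supports (display: grid) {
  /* the "nice stuff": grid/masonry styles only for modern browsers */
  .grid--masonry {
    display: grid;
    grid-template-columns: repeat(auto-fit, Min(10em, 100%));
    grid-template-rows: masonry; /* currently Firefox-only, behind a flag */
    justify-content: center;
    grid-gap: .5em;
    padding: .5em;
  }
}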

The result in Internet Explorer

It may not look the same, but it looks decent and it’s perfectly functional. Supporting a browser doesn’t mean replicating all the visual candy for it. It means the page works and doesn’t look broken or horrible.

What about the no JavaScript case?

Well, we can apply the fancy styles only if the root element has a js class, which we add via JavaScript! Otherwise, we get a basic grid where all the items have the same size.
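That can be as simple as this (a minimal sketch of the pattern; the article’s exact code isn’t shown):

// Add a `js` class to the root element as early as possible,
// so the stylesheet knows scripting is available.
document.documentElement.classList.add('js');

The masonry-specific styles are then scoped under a .js .grid--masonry selector, so browsers without JavaScript only ever see the basic grid.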

The no JavaScript result (Demo).

The post A Lightweight Masonry Solution appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

10 modern layouts in 1 line of CSS

Css Tricks - Fri, 07/31/2020 - 8:09am

Una doing an amazing job of showing just how (dare I say it?) easy CSS layout has gotten. There is plenty to learn, but what you learn makes sense, and once you have, it’s quite empowering.

The demos are all together here.

Direct Link to ArticlePermalink

The post 10 modern layouts in 1 line of CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A Look at What’s New in Chrome DevTools in 2020

Css Tricks - Fri, 07/31/2020 - 4:00am

I’m excited to share some of the newer features in Chrome DevTools with you. There’s a brief introduction below, and then we’ll cover many of the new DevTools features. We’ll also look at what’s happening in some other browsers. I keep up with this stuff, as I create Dev Tips, the largest collection of DevTools tips you’ll find online! 

It’s a good idea to find out what’s changed in DevTools because it’s constantly evolving and new features are specifically designed to help and improve our development and debugging experience.

Let’s jump into the latest and greatest. While the public stable version of Chrome does have most of these features, I’m using Chrome Canary as I like to stay on the bleeding edge.

Lighthouse

Lighthouse is an open source tool for auditing web pages, typically around performance, SEO, accessibility and such. For a while now, Lighthouse has been bundled as part of DevTools meaning you can find it in a panel named… Lighthouse!

Well done, Mr. Coyier. 🏆

I really like Lighthouse because it’s one of the easiest parts of DevTools to use. Click “Generate report” and you immediately get human-readable notes for your webpage, such as:

Document uses legible font sizes
100% legible text

Or:

Avoid an excessive DOM size (1,189 elements)

Almost every single audit links to developer documentation that explains how the audit may fail, and what you can do to improve it.

The best way to get started with Lighthouse is to run audits on your own websites:

  1. Open up DevTools and navigate to the Lighthouse panel when you are on one of your sites
  2. Select the items you want to audit (Best practices is a good starting point)
  3. Click Generate report
  4. Click on any passed/failed audits to investigate the findings

Even though Lighthouse has been part of DevTools for a while now (since 2017!), it still deserves a significant mention because of the user-facing features it continues to ship, such as:

  • An audit that checks that anchor elements resolve to their URLs (Fun fact: I worked on this!)
  • An audit that checks whether the Largest Contentful Paint metric is fast enough
  • An audit to warn you of unused JavaScript
A better “Inspect Element”

This is a subtle and, in some ways, very small feature, but it can have profound effects on how we treat web accessibility.

Here’s how it works. When you use Inspect Element — what is arguably the most common use of DevTools — you now get a tooltip with additional information on accessibility.

Accessibility is baked right in!

The reason I say this can have a profound impact is because DevTools has had accessibility features for quite some time now, but how many of us actually use them? Including this information on a commonly used feature like Inspect Element gives it a lot more visibility and makes it a lot more accessible.

The tooltip includes:

  • the contrast ratio of the text (how well, or how poorly, does the foreground text contrast with the background color)
  • the text representation
  • the ARIA role
  • whether or not the inspected element is keyboard-focusable

To try this out, right-click (or Cmd + Shift + C) on an element and select Inspect to view it in DevTools.

I made a 14-minute video on Accessibility debugging with Chrome DevTools which covers some of this in more detail.

Emulate vision deficiencies

Exactly as it says on the tin, you can use Chrome DevTools to emulate vision impairments. For example, we can view a site through the lens of blurred vision.

That’s a challenge to read!

How can you do this in DevTools? Like this:

  1. Open DevTools (right click and “Inspect” or Cmd + Shift + C).
  2. Open the DevTools Command menu (Cmd + Shift + P on Mac, Ctrl + Shift + P on Windows).
  3. Select Show Rendering in the Command menu.
  4. Select a deficiency in the Rendering pane.

We used blurred vision as an example, but DevTools has other options, including: protanopia, deuteranopia, tritanopia, and achromatopsia.

Like with any tool of this nature, it’s designed to be a complement to our (hopefully) existing accessibility skills. In other words, it’s not instructional, but rather, influential on the designs and user experiences we create.

Here are a couple of extra resources on low vision accessibility and emulation:

Get timing on performance

The Performance Panel in DevTools can sometimes look like a confusing mish-mash of shapes and colors.

This update to it is great because it does a better job surfacing meaningful performance metrics.

What we want to look at are those extra timing rectangles shown in the “Timings” section of a Performance Panel recording. These highlight:

  • DOMContentLoaded: The event which triggers when the initial HTML loads
  • First Paint: When the browser first paints pixels to the screen
  • First Contentful Paint: The point at which the browser draws content from the DOM which indicates to the user that content is loading
  • Onload: When the page and all of its resources have finished loading
  • Largest Contentful Paint: The largest image or text element, which is rendered in the viewport

As a bonus, if you find the Largest Contentful Paint event in a Performance Panel recording, you can click on it to get additional information.

Nice work, CSS-Tricks! The Largest Contentful Paint happens early on in the page load.

While there is a lot of golden information here, the “Related Node” is potentially the most useful item because it specifies exactly which element contributed to the LCP event.

To try this feature out:

  1. Open up DevTools and navigate to the Performance panel
  2. Click “Start profiling and reload page”
  3. Observe the timing metrics in the Timings section of a recording
  4. Click the individual metrics to see what additional information you get
Monitor performance

If you want to quickly get started using DevTools to analyze performance and you’ve already tried Lighthouse, then I recommend the Performance Monitor feature. This is sort of like having WebPageTest.org right at your fingertips with things like CPU usage.

Here’s how to access it:

  1. Open DevTools
  2. Open up the Command menu (Cmd + Shift + P on Mac, Ctrl + Shift + P on Windows)
  3. Select “Show performance monitor” from the Command menu
  4. Interact and navigate around the website
  5. Observe the results

The Performance Monitor can give you interesting metrics, however, unlike Lighthouse, it’s for you to figure out how to interpret them and take action. No suggestions are provided. It’s up to you to study that CPU usage chart and ask whether something like 90% is an acceptable level for your site (it probably isn’t).

The Performance Monitor has an interactive legend, where you can toggle metrics on and off, such as:

  • CPU usage
  • JS heap size
  • DOM Nodes
  • JS event listeners
  • Documents
  • Document Frames
  • Layouts / sec
  • Style recalcs / sec 
CSS overview and local overrides

CSS-Tricks has already covered these features, so go and check them out!

  • CSS Overview: A handy DevTools panel that gives a bunch of interesting stats on the CSS your page is using
  • Local Overrides:  A powerful feature that lets you override production websites with your local resources, so you can easily preview changes 
So, what about DevTools in other browsers?

I’m sure you noticed that I’ve been using Chrome throughout this article. It’s the browser I use personally. That said, it’s worth considering that:

  • Firefox DevTools is looking pretty great right now
  • With Microsoft Edge extending from Chromium, it too will benefit from these DevTools features
  • As evident on the Safari Technology Preview Release Notes (search for Web Inspector on that page), Safari DevTools has come a long way 

In other words, keep an eye out because this is a quickly evolving space!

Conclusion

We covered a lot in a short amount of space!

  • Lighthouse: A panel that provides tips and suggestions for performance, accessibility, SEO and best practices.
  • Inspect Element: An enhancement to the Inspect Element feature that provides accessibility information in the Inspect Element tooltip
  • Emulate vision deficiencies: A feature in the Rendering pane to view a page through the lens of low vision.
  • Performance Panel Timings: Additional metrics in the Performance panel recording, showing user-oriented stats, like Largest Contentful Paint
  • Performance Monitor: A real-time visualization of performance metrics for the current website, such as CPU usage and DOM size

Please check out my mailing list, Dev Tips, if you want to keep up with the latest updates and get over 200 web development tips! I also have a premium video course over at ModernDevTools.com. And, I tend to post loads of bonus web development resources on Twitter.

The post A Look at What’s New in Chrome DevTools in 2020 appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Inventing Posters

Typography - Thu, 07/30/2020 - 11:38pm

The modern poster first appeared in France in the 19th century, but its antecedents can be found in Renaissance printmaking. Woodcut, engraving, etching, and drypoint were techniques used by the likes of Albrecht Dürer, Hieronymus Bosch, and Raphael, while printmaking publishers, like Hieronymus Cock, helped popularize the standalone art print and turn it into a thriving industry.

The post Inventing Posters appeared first on I Love Typography.

Spotting a Trend

Css Tricks - Thu, 07/30/2020 - 12:49pm

There are tons of smokin’ hot websites out there, with an equal or greater number of talented designers and developers who make them. The web is awesome like that and encourages that sort of creativity.

Even so, it amazes me that certain traits find their way into things. I mean, it makes sense. Many of us use the same UI frameworks and take cues from sites we admire. But every once in a while, my eye starts catching wind of the zeitgeist and commonalities that come with it.

The latest one? Blobby shapes. It’s a fun flourish that adds a little panache, especially for flat designs that need a splash of color or an interesting focal point that leads the eye from one place to another. I’m sure you’ve seen it. I spent one week collecting screenshots of websites I came across that use it. I certainly wasn’t looking for examples; they just sort of popped up in my normal browsing.

I’m sorry if it seems like I’m calling out people because that’s not my intention. I actually love the concept — so much, in fact, that I’m considering it on a project! Some of the examples in that gallery are flat-out gorgeous.

After spotting these blobby shapes a number of times, I’ve started to notice some considerations to take into account when using them. Things like:

  • Watch for contrast when text sits on top of a blob. There are plenty of cases where the document background is white and the blob is dark. If text runs through them, it’s going to be tough to find a font color that satisfies the WCAG 2.1 AA standard for legibility.
  • Tread lightly when mixing and matching colors. One hot pink blob behind a card component ain’t a big deal, but throw in yellow, orange, and other bright colors that sit alongside it, and the design starts to distract from the content. Plus, a vibrant rainbow of blobby shapes can raise accessibility concerns. A flourish is just that: a nice touch that’s subtle but impactful.
  • Blobs are good for more than color. Some of the most interesting instances I’ve seen cut images into interesting shapes. It’s cool that we can embed an image directly in SVG and then mask it with a path (see the sketch after this list).
  • Blobs are also good for more than backgrounds. Did you catch that screenshot from Topcoder’s site? They’re using it for tabs, which is super funky and cool.
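As an aside, here’s a rough sketch of that image-clipping technique (the path data is hypothetical blob output, not taken from any of the sites above):

<svg viewBox="0 0 200 200" width="200" height="200" role="img" aria-label="Photo clipped to a blobby shape">
  <clipPath id="blob">
    <!-- hypothetical blobby path, e.g. from a blob generator tool -->
    <path d="M100,20 C145,18 182,55 180,100 C178,148 132,184 90,179 C52,174 21,141 20,100 C19,57 58,22 100,20 Z" />
  </clipPath>
  <image href="photo.jpg" width="200" height="200" preserveAspectRatio="xMidYMid slice" clip-path="url(#blob)" />
</svg>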

All of this has me thinking about how the websites of today will be looked at by the developers of tomorrow. Remember way back, like 15 years ago, when many sites adopted Apple’s use of reflective imagery? I probably still have some Photoshop muscle memory from replicating that effect so many times.

Notice the skeuomorphic icons — that was popular too!

Skeuomorphism, bevels, animated GIF backgrounds, long shadows, heroes, gradients, bokeh backgrounds… all of these and many other visual treatments have had their day in the sun. Perhaps blobs will join that club at some point. Perhaps they’ll come back in style after that. Who knows! I just find it interesting to reflect on the things that have inspired us over the last three decades and imagine how the things we do today will be seen through the long lens of time.

It’d be awesome to see other instances of blobby shapes — share ’em if you’ve got ’em!

The post Spotting a Trend appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

SVG Title vs. HTML Title Attribute

Css Tricks - Thu, 07/30/2020 - 7:40am

You know the title attribute? I can do this:

<div title="The Title"> I'm a div with a `title` </div>

And now if I’m on a device with a mouse pointer and hover the cursor over that element, I get…

Which, uh, I guess is something. I sometimes use it for things like putting an expanded date or time on an element that uses shorthand for it. It’s a tiny bit of UX helpfulness reserved exclusively for sighted mouse users.

But it’s not particularly useful, as I understand it. Ire Aderinokun dug into how it’s used for the <abbr> element (a commonly cited example) and found that it’s not-so-great alone. She suggests a JavaScript-enhanced pattern. She also mentions that JAWS has a setting for announcing titles in there, so that’s interesting (although it sounds like it’s off by default).

I honestly just don’t know how useful title is for screen readers, but it’s certainly going to be nuanced.

I did just learn something about titles though… this doesn’t work:

<!-- Incorrect usage -->
<svg title="Checkout">
</svg>

If you hover over that element, you won’t get a title display. You have to do it like this:

<!-- Correct usage -->
<svg>
  <title id="unique-id">Checkout</title>
  <!-- More detail -->
  <desc>A shopping cart icon with baguettes and broccoli in the cart.</desc>
</svg>

Which, interestingly, Firefox 79 just started supporting.

When you use title like that, the hoverable area to reveal the title popup is the entire rectangle of the <svg>.

I was looking at all this because I got an interesting email from someone who was in a situation where the title popup only seemed to come up when hovering over the “filled in” pixels of an SVG, and not where transparent pixels were. Weird, I thought. I couldn’t replicate it in my testing either.

Turns out there is a situation like this. You can apply a <title> within a <use> element, and then the title only applies to the pixels that come in via the <use>.
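Here’s a minimal sketch of that kind of markup (the shape and titles are made up to mirror the demo below):

<svg viewBox="0 0 100 100">
  <defs>
    <path id="shape" d="M10,50 a40,40 0 1,0 80,0 a40,40 0 1,0 -80,0" />
  </defs>
  <!-- each <use> carries its own <title>, so each tooltip only
       covers the pixels painted by that particular <use> -->
  <use href="#shape" fill="black">
    <title>The black part</title>
  </use>
  <use href="#shape" fill="white" transform="scale(0.5) translate(50 50)">
    <title>The white part</title>
  </use>
</svg>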

(CodePen demo)

If you remove the “white part” title, you’ll see the “black part” only comes up over the black pixels. Seems to be consistent across browsers. Just something to watch out for if that’s how you apply titles.

The post SVG Title vs. HTML Title Attribute appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Getting the Most Out of Variable Fonts on Google Fonts

Css Tricks - Thu, 07/30/2020 - 4:24am

I have spent the past several years working (alongside a bunch of super talented people) on a font family called Recursive Sans & Mono, and it just launched officially on Google Fonts!

Wanna try it out super fast? Here’s the embed code to use the full Recursive variable font family from Google Fonts (but you will get a lot more flexibility & performance if you read further!)

<link href="https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap" rel="stylesheet"> Recursive is made for code, websites, apps, and more. Recursive Mono has both Linear and Casual styles for different “voices” in code, along with cursive italics if you want them — plus a wider weight range for monospaced display typography. Recursive Sans is proportional, but unlike most proportional fonts, letters maintain the same width across styles for more flexibility in UI interactions and layout.

I started Recursive as a thesis project for a type design masters program at KABK TypeMedia, and after I launched my type foundry, Arrow Type, I was commissioned by Google Fonts to finish and release Recursive as an open-source, OFL font.

You can see Recursive and learn more about what it can do at recursive.design

Recursive is made to be a flexible type family for both websites and code, where its main purpose is to give developers and designers some fun & useful type to play with, combining fresh aesthetics with the latest in font tech.

First, a necessary definition: variable fonts are font files that fit a range of styles inside one file, usually in a way that allows the font user to select a style from a fluid range of styles. These stylistic ranges are called variable axes, and can be parameters, like font weight, font width, optical size, font slant, or more creative things. In the case of Recursive, you can control the “Monospacedness” (from Mono to Sans) and “Casualness” (between a normal, linear style and a brushy, casual style). Each type family may have one or more of its own axes and, like many features of type, variable axes are another design consideration for font designers.

You may have seen that Google Fonts has started adding variable fonts to its vast collection. You may have read about some of the awesome things variable fonts can do. But, you may not realize that many of the variable fonts coming to Google Fonts (including Recursive) have a lot more stylistic range than you can get from the default Google Fonts front end.

Because Google Fonts has a huge range of users — many of them new to web development — it is understandable that they’re keeping things simple by only showing the “weight” axis for variable fonts. But, for fonts like Recursive, this simplification actually leaves out a bunch of options. On the Recursive page, Google Fonts shows visitors eight styles, plus one axis. However, Recursive actually has 64 preset styles (also called named instances), and a total of five variable axes you can adjust (which account for a slew of more potential custom styles).

Recursive can be divided into four “subfamilies,” as I think of them. The part shown by Google Fonts is the simplest, proportional (sans-serif) version. The four Recursive subfamilies each have a range of weights, plus Italics, and can be categorized as:

  • Sans Linear: A proportional, “normal”-looking sans-serif font. This is what gets shown on the Google Fonts website.
  • Sans Casual: A proportional “brush casual” font
  • Mono Linear: A monospace “normal” font
  • Mono Casual: A monospace “brush casual” font

This is probably better to visualize than to describe in words. Here are two tables (one for Sans, the other for Mono) showing the 64 named instances:

But again, the main Google Fonts interface only provides access to eight of those styles, plus the Weight axis:

Recursive has 64 preset styles — and many more using when using custom axis settings — but Google Fonts only shows eight of the preset styles, and just the Weight axis of the five available variable axes.

Not many variable fonts today have more than a Weight axis, so this is an understandable UX choice in some sense. Still, I hope they add a little more flexibility in the future. As a font designer & type fan, seeing the current weight-only approach feels more like an artificial flattening than true simplification — sort of like if Google Maps were to “simplify” maps by excluding every road that wasn’t a highway.

Luckily, you can still access the full potential of variable fonts hosted by Google Fonts: meet the Google Fonts CSS API, version 2. Let’s take a look at how you can use this to get more out of Recursive.

But first, it is helpful to know a few things about how variable fonts work.

How variable fonts work, and why it matters

If you’ve ever worked with photos on the web then you know that you should never serve a 9,000-pixel JPEG file when a smaller version will do. Usually, you can shrink a photo down using compression to save bandwidth when users download it.

There are similar considerations for font files. You can often reduce the size of a font dramatically by subsetting the characters included in it (a bit like cropping pixels to just leave the area you need). You can further compress the file by converting it into a WOFF2 file (which is a bit like running a raster image file through an optimizer tool like ImageOptim). Vendors that host fonts, like Google Fonts, will often do these things for you automatically.

Now, think of a video file. If you only need to show the first 10 seconds of a 60-second video, you could trim the last 50 seconds to have a much smaller video file.

Variable fonts are a bit like video files: they have one or more ranges of information (variable axes), and those can often either be trimmed down or “pinned” to a certain location, which helps to reduce file size. 

Of course, variable fonts are nothing like video files. Fonts record each letter shape in vectors (similar to how SVGs store shape information). Variable fonts have multiple “source locations” which are like keyframes in an animation. To go between styles, the control points that make up letters are mathematically interpolated between their different source locations (also called deltas). A font may have many sets of deltas (at least one per variable axis, but sometimes more). To trim a variable font, then, you must trim out unneeded deltas.

As a specific example, the Casual axis in Recursive takes letterforms from “Linear” to “Casual” by interpolating vector control points between two extremes: basically, a normal drawing and a brushy drawing. The ampersand glyph animation below shows the mechanics in action: control points draw rounded corners at one extreme and shift to squared corners on the other end.

Generally, each added axis doubles the number of drawings that must exist to make a variable font work. Sometimes the number is more or less – Recursive’s Weight axis requires 3 locations (tripling the number of drawings), while its Cursive axis doesn’t require extra locations at all, but actually just activates different alternate glyphs that already exist at each location. But, the general math is: if you can cut unneeded axes from a variable font, you will usually get a smaller file.

When using the Google Fonts API, you are actually opting in to each axis. This way, instead of starting with a big file and whittling it down, you get to pick and choose the parts you want.

Variable axis tags

If you’re going to use the Google Fonts API, you first need to know how to label axes. Every axis has both a full name and an abbreviation.

These axis abbreviations are in the form of four-letter “tags.” These are lowercase for axes defined by Microsoft and recorded in the OpenType spec, and uppercase for newer axes invented or defined by others (these are also called “custom” or “private” axes). There are efforts to standardize some of these new axes.

There are currently five standard axes a font can include: 

  • wght – Weight, to control lightness and boldness
  • wdth – Width, to control overall letter width
  • opsz – Optical Size, to control adjustments to design for better readability at various sizes
  • ital – Italic, generally to switch between separate upright/italic designs
  • slnt – Slant, generally to control upright-to-slanted designs with intermediate values available

Custom axes can be almost anything. Recursive includes three of them — Monospace (MONO), Casual (CASL), and Cursive (CRSV)  — plus two standard axes, wght and slnt.

Google Fonts API basics

When you configure a font embed from the Google Fonts interface, it gives you a bit of HTML or CSS which includes a URL, and this ultimately calls in a CSS document that includes one or more @font-face rules. This includes things like font names as well as links to font files on Google servers.

This URL is actually a way of calling the Google Fonts API, and has a lot more power than you might realize. It has a few basic parts: 

  1. The main URL, specifying the API (https://fonts.googleapis.com/css2)
  2. Details about the fonts you are requesting in one or more family parameters
  3. A display parameter for setting the font-display property

As an example, let’s say we want the regular weight of Recursive (in the Sans Linear subfamily). Here’s the URL we would use with our CSS @import:

@import url('https://fonts.googleapis.com/css2?family=Recursive&display=swap');

Or we can link it up in the <head> of our HTML:

<link href="https://fonts.googleapis.com/css2?family=Recursive&display=swap" rel="stylesheet">

Once that’s in place, we can start applying the font in CSS:

body {
  font-family: 'Recursive', sans-serif;
}

There is a default value for each axis:

  • MONO 0 (Sans/proportional)
  • CASL 0 (Linear/normal)
  • wght 400 (Regular)
  • slnt 0 (upright)
  • CRSV 0 (Roman/non-cursive lowercase)
Choose your adventure: styles or axes

The Google Fonts API gives you two ways to request portions of variable fonts:

  1. Listing axes and the specific non-default values you want from them
  2. Listing axes and the ranges you want from them
Getting specific font styles

Font styles are requested by adding parameters to the Google Fonts URL. To keep the defaults on all axes but get a Casual style, you could make the query Recursive:CASL@1 (this will serve Recursive Sans Casual Regular). To make that Recursive Mono Casual Regular, specify two axes before the @ and then their respective values (but remember, custom axes have uppercase tags):

https://fonts.googleapis.com/css2?family=Recursive:CASL,MONO@1,1&display=swap

To request both Regular and Bold, you would simply update the family call to Recursive:wght@400;700, adding the wght axis and specific values on it:

https://fonts.googleapis.com/css2?family=Recursive:wght@400;700&display=swap

A very helpful thing about Google Fonts is that you can request a bunch of individual styles from the API, and wherever possible, it will actually serve variable fonts that cover all of those requested styles, rather than separate font files for each style. This is true even when you are requesting specific locations, rather than variable axis ranges — if they can serve a smaller font file for your API request, they probably will.

As variable fonts can be trimmed more flexibly and efficiently in the future, the files served for given API requests will likely get smarter over time. So, for production sites, it may be best to request exactly the styles you need.

Where it gets interesting, however, is that you can also request variable axes. That allows you to retain a lot of design flexibility without changing your font requests every time you want to use a new style.

Getting a full variable font with the Google Fonts API

The Google Fonts API seeks to make fonts smaller by having users opt into only the styles and axes they want. But, to get the full benefits of variable fonts (more design flexibility in fewer files), you should use one or more axes. So, instead of requesting single styles with Recursive:wght@400;700, you can instead request that full range with Recursive:wght@400..700 (changing from the ; to .. to indicate a range), or even extending to the full Recursive weight range with Recursive:wght@300..1000 (which adds very little file size, but a whole lot of extra design oomph).

You can add additional axes by listing them alphabetically (with lowercase standard axes first, then uppercase custom axes) before the @, then specifying their values or ranges after that in the same order. For instance, to add the MONO axis and the wght axis, you could use Recursive:wght,MONO@300..1000,0..1 as the font query.

Or, to get the full variable font, you could use the following URL:

https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap

Of course, you still need to put that into an HTML link, like this:

<link href="https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap" rel="stylesheet"> Customizing it further to balance flexibility and filesize 

While it can be tempting to use every single axis of a variable font, it’s worth remembering that each additional axis adds to the overall file size. So, if you really don’t expect to use an axis, it makes sense to leave it off. You can always add it later.

Let’s say you want Recursive’s Mono Casual styles in a range of weights. You could use Recursive:wght,CASL,MONO@300..1000,1,1 like this:

<link href="https://fonts.googleapis.com/css2?family=Recursive:CASL,MONO,wght@1,1,300..1000&display=swap" rel="stylesheet">

You can, of course, add multiple font families to an API call with additional family parameters. Just be sure that the fonts are alphabetized by family name.

<link href="https://fonts.googleapis.com/css2?family=Inter:slnt,wght@-10..0,100..900?family=Recursive:CASL,MONO,wght@1,1,300..1000&display=swap" rel="stylesheet"> Using variable fonts

Standard axes can all be controlled with existing CSS properties. For instance, if you have a variable font with a weight range, you can specify a specific weight with font-weight: 425;. All axes can be controlled with font-variation-settings. So, if you want a Mono Casual very-heavy style of Recursive (assuming you have called the full family as shown above), you could use the following CSS:

body {
  font-weight: 950;
  font-variation-settings: 'MONO' 1, 'CASL' 1;
}

Something good to know: font-variation-settings is much nicer to use along with CSS custom properties.
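For example, here’s a quick sketch of that pattern (declaring the axes once with variables so any element can override them):

* {
  /* every element resolves the variables itself, so overriding
     --mono or --casl on any element updates its rendered style */
  font-variation-settings: 'MONO' var(--mono, 0), 'CASL' var(--casl, 0);
}

code {
  --mono: 1; /* code snippets switch to the monospace design */
}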

You can read more specifics about designing with variable fonts at VariableFonts.io and in the excellent collection of CSS-Tricks articles on variable fonts.

Nerdy notes on the performance of variable fonts

If you were to use all 64 preset styles of Recursive as separate WOFF2 files (with their full, non-subset character set), it would be total of about 6.4 MB. By contrast, you could have that much stylistic range (and everything in between) at just 537 KB in a variable font. Of course, that is a slightly absurd comparison — you would almost never actually use 64 styles on a single web page, especially not with their full character sets (and if you do, you should use subsets with unicode-range).
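For reference, a unicode-range subset in a self-hosted setup might look like the following sketch (the file name and ranges are purely illustrative):

@font-face {
  font-family: 'Recursive';
  src: url('/fonts/recursive-latin-subset.woff2') format('woff2');
  font-weight: 300 1000; /* variable weight range */
  /* the browser only downloads this file if the page uses these characters */
  unicode-range: U+0000-00FF, U+2013-2014, U+2018-201D, U+2026;
}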

A better comparison is Recursive with one axis range versus styles within that axis range. In my testing, a Recursive WOFF2 file that’s subset to the “Google Fonts Latin Basic” character set (including only characters to cover English and Western European languages), including the full 300–1000 Weight range (and all other axes “pinned” to their default values) is 60 KB. Meanwhile, a single style with the same subset is 25 KB. So, if you use just three weights of Recursive, you can save about 15 KB by using the variable font instead of individual files.

The full variable font as a subset WOFF2 clocks in at 281 KB which is kind of a lot for a font, but not so much if you compare it to the weight of a big JPEG image. So, if you assume that individual styles are about 25 KB, if you plan to use more than 11 styles, you would be better off using the variable font.

This kind of math is mostly an academic exercise for two big reasons:

  1. Variable fonts aren’t just about file size. The much bigger advantage is that they allow you to just design, picking the exact font weights (or other styles) that you want. Is a font looking a little too light? Bump up the font-weight a bit, say from 400 to 425!
  2. More importantly (as explained earlier), if you request variable font styles or axes from Google Fonts, they take care of the heavy lifting for you, sending whatever fonts they deem the most performant and useful based on your API request and the browsers visitors access your site from.

So, you don’t really need to go downloading fonts that the Google Fonts API returns to compare their file sizes. Still, it’s worth understanding the general tradeoffs so you can best decide when to opt into the variable axes and when to limit yourself to a couple of styles.

What’s next?

Fire up CodePen and give the API a try! For CodePen, you will probably want to use the CSS @import syntax, like this in the CSS panel:

@import url('https://fonts.googleapis.com/css2?family=Recursive:CASL,CRSV,MONO,slnt,wght@0..1,0..1,0..1,-15..0,300..1000&display=swap');

It is apparently better to use the HTML link syntax to avoid blocking parallel downloads of other resources. In CodePen, you’d crack open the Pen settings, select HTML, then drop the <link> in the HTML head settings.

Or, hey, you can just fork my CodePen and experiment there:

(CodePen demo)

Take an API configuration shortcut

If you want to skip the complexity of figuring out exact API calls and are looking to opt into variable axes of Recursive with semi-advanced API calls, I have put together a simple configuration tool on the Recursive minisite (click the “Get Recursive” button). This allows you to quickly select pinned styles or variable ranges that you want to use, and it even gives estimates for the resulting file size. But this only exposes some of the API’s functionality, and you can get more specific if you want. It’s my attempt to get people using the most stylistic range in the smallest files, taking into account the current limitations of variable font instancing.

Use Recursive for code

Also, Recursive is actually designed first as a font to use for code. You can use it on CodePen via your account settings. Better yet, you can download and use the latest Recursive release from GitHub and set it up in any code editor.

Explore more fonts!

The Google Fonts API doc helpfully includes a (partial) list of variable fonts along with details on their available axis ranges. A couple of my favorites with axes beyond just Weight are Encode Sans (wdth, wght) and Inter (slnt, wght). You can also filter Google Fonts to show only variable fonts, though most of these results have only a Weight axis (still cool and useful, but don’t need custom URL configuration).

Some more amazing variable fonts are coming to Google Fonts. Some that I am especially looking forward to are:

  • Fraunces: “A display, ‘Old Style’ soft-serif typeface inspired by the mannerisms of early 20th century typefaces such as Windsor, Souvenir, and the Cooper Series”
  • Roboto Flex: Like Roboto, but with extensive ranges of Weight, Width, and Optical Size
  • Crispy: A creative, angular, super-flexible variable display font
  • Science Gothic: A squarish sans “based closely on Bank Gothic, a typeface from the early 1930s—but a lowercase, design axes, and language coverage have been added”
  • Commissioner: A low-contrast humanist sans-serif with almost classical proportions and three “voices”: stems can be straight, flared, or wedged.

And yes, you can absolutely download and self-host these fonts if you want to use them in your projects today. But stay tuned to Google Fonts for more awesomely-flexible typefaces to come!

Of course, the world of type is much bigger than open-source fonts. There are a bunch of incredible type foundries working on exciting, boundary-pushing fonts, and many of them are also exploring new & exciting territory in variable fonts. Many tend to take other approaches to licensing, but for the right project, a good typeface can be an extremely good value (I’m obviously biased, but for a simple argument, just look at how much typography strengthens brands like Apple, Stripe, Google, IBM, Figma, Slack, and so many more). If you want to feast your eyes on more possibilities and you don’t already know these names, definitely check out DJR, OHno, Grilli, XYZ, Dinamo, Typotheque, Underware, Bold Monday, and the many very-fun WIP projects on Future Fonts. (I’ve left out a bunch of other amazing foundries, but each of these has done stuff I particularly love, and this isn’t a directory of type foundries.)

Finally, some shameless plugs for myself: if you’d like to support me and my work beyond Recursive, please consider checking out my WIP versatile sans-serif Name Sans, signing up for my (very) infrequent newsletter, and giving me a follow on Instagram.

The post Getting the Most Out of Variable Fonts on Google Fonts appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Dark Ages of the Web

Css Tricks - Wed, 07/29/2020 - 1:17pm

A very fun jaunt through the early days of front-end web development. They are open to pull requests, so submit one if you’re into this kind of fun chronicling of our weird history!

That CSS3 Button generator really hits home. 😬

Direct Link to ArticlePermalink

The post Dark Ages of the Web appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

style9: build-time CSS-in-JS

Css Tricks - Wed, 07/29/2020 - 4:27am

In April of last year, Facebook revealed its big new redesign. An ambitious project, it was a rebuild of a large site with a massive amount of users. To accomplish this, they used several technologies they have created and open-sourced, such as React, GraphQL, Relay, and a new CSS-in-JS library called stylex.

This new library is internal to Facebook, but they have shared enough information about it to make an open-source implementation, style9, possible.

Why another CIJ library?

There are already plenty of CSS-in-JS (CIJ) libraries, so it might not be obvious why another one is needed. style9 has the same benefits as all other CIJ solutions, as articulated by Christopher Chedeau, including scoped selectors, dead code elimination, deterministic resolution, and the ability to share values between CSS and JavaScript.

There are, however, a couple of things that make style9 unique.

Minimal runtime

Although the styles are defined in JavaScript, they are extracted by the compiler into a regular CSS file. That means that no styles are shipped in your final JavaScript file. The only things that remain are the final class names, which the minimal runtime will conditionally apply, just like you would normally do. This results in smaller code bundles, a reduction in memory usage, and faster rendering.

Since the values are extracted at compile time, truly dynamic values can’t be used. These are thankfully not very common and since they are unique, don’t suffer from being defined inline. What’s more common is conditionally applying styles, which of course is supported. So are local constants and mathematical expressions, thanks to babel’s path.evaluate.

Atomic output

Because of how style9 works, every property declaration can be made into its own class with a single property. So, for example, if we use opacity: 0 in several places in our code, it will only exist in the generated CSS once. The benefit of this is that the CSS file grows with the number of unique declarations, not with the total amount of declarations. Since most properties are used many times, this can lead to dramatically smaller CSS files. For example, Facebook’s old homepage used 413 KB of gzipped CSS. The redesign uses 74 KB for all pages. Again, smaller file size leads to better performance.

Slide from Building the New Facebook with React and Relay by Frank Yan, at 13:23 showing the logarithmic scale of atomic CSS.

Some may complain about this, that the generated class names are not semantic, that they are opaque and are ignoring the cascade. This is true. We are treating CSS as a compilation target. But for good reason. By questioning previously assumed best practices, we can improve both the user and developer experience.

In addition, style9 has many other great features, including: typed styles using TypeScript, unused style elimination, the ability to use JavaScript variables, and support for media queries, pseudo-selectors and keyframes.

Here’s how to use it

First, install it like usual:

npm install style9

style9 has plugins for Rollup, Webpack, Gatsby, and Next.js, which are all based on a Babel plugin. Instructions on how to use them are available in the repository. Here, we’ll use the webpack plugin.

const Style9Plugin = require('style9/webpack');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  module: {
    rules: [
      // This will transform the style9 calls
      {
        test: /\.(tsx|ts|js|mjs|jsx)$/,
        use: Style9Plugin.loader
      },
      // This is part of the normal Webpack CSS extraction
      {
        test: /\.css$/i,
        use: [MiniCssExtractPlugin.loader, 'css-loader']
      }
    ]
  },
  plugins: [
    // This will sort and remove duplicate declarations in the final CSS file
    new Style9Plugin(),
    // This is part of the normal Webpack CSS extraction
    new MiniCssExtractPlugin()
  ]
};

Defining styles

The syntax for creating styles closely resembles other libraries. We start by calling style9.create with objects of styles:

import style9 from 'style9';

const styles = style9.create({
  button: {
    padding: 0,
    color: 'rebeccapurple'
  },
  padding: {
    padding: 12
  },
  icon: {
    width: 24,
    height: 24
  }
});

Because all declarations will result in atomic classes, shorthands such as flex: 1 and background: blue won't work, as they set multiple properties. Properties that can be expanded, such as padding, margin, overflow, etc., will be automatically converted to their longhand variants. If you use TypeScript, you will get an error when using unsupported properties.
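
As a quick sketch of what that means in practice (the box styles here are made up):

import style9 from 'style9';

const styles = style9.create({
  box: {
    // Fine: padding is expandable, so the compiler turns it into
    // padding-top/right/bottom/left, each with its own atomic class
    padding: 12,
    // Fine: longhands instead of the disallowed flex: 1 and
    // background: blue shorthands
    flexGrow: 1,
    backgroundColor: 'blue'
  }
});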

Resolving styles

To generate a class name, we can now call the function returned by style9.create. It accepts as arguments the keys of the styles we want to use:

const className = styles('button');

The function works in such a way that styles on the right take precedence and will be merged with the styles on the left, like Object.assign. The following would result in an element with a padding of 12px and with rebeccapurple text.

const className = styles('button', 'padding');

We can conditionally apply styles using any of the following formats:

// logical AND
styles('button', hasPadding && 'padding');

// ternary
styles('button', isGreen ? 'green' : 'red');

// object of booleans
styles({
  button: true,
  green: isGreen,
  padding: hasPadding
});

These function calls will be removed during compilation and replaced with direct string concatenation. The first line in the code above will be replaced with something like 'c1r9f2e5 ' + (hasPadding ? 'cu2kwdz ' : ''). No runtime is left behind.

Combining styles

We can extend a style object by accessing it with a property name and passing it to style9.

const styles = style9.create({
  blue: {
    color: 'blue'
  }
});

const otherStyles = style9.create({
  red: {
    color: 'red'
  }
});

// will be red
const className = style9(styles.blue, otherStyles.red);

Just like with the function call, the styles on the right take precedence. In this case, however, the class name can't be statically resolved. Instead, the property values will be replaced by classes and joined at runtime. The properties get added to the CSS file just like before.
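
Roughly speaking (the class names here are invented), the compiled output behaves something like this:

// After compilation, each style object maps properties to atomic
// class names, conceptually:
//   styles.blue     -> { color: 'cBlue1' }
//   otherStyles.red -> { color: 'cRed2' }
// style9() merges the objects at runtime, later arguments winning,
// then joins the remaining values into the final class string:
//   style9(styles.blue, otherStyles.red) // -> 'cRed2'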

Summary

The benefits of CSS-in-JS are very much real. That said, we are imposing a performance cost when embedding styles in our code. By extracting the values during build-time we can have the best of both worlds. We benefit from co-locating our styles with our markup and the ability to use existing JavaScript infrastructure, while also being able to generate optimal stylesheets.

If style9 sounds interesting to you, have a look at the repo and try it out. And if you have any questions, feel free to open an issue or get in touch.

Acknowledgements

Thanks to Giuseppe Gurgone for his work on style-sheet and dss, Nicolas Gallagher for react-native-web, Satyajit Sahoo and everyone at Callstack for linaria, Christopher Chedeau, Sebastian McKenzie, Frank Yan, Ashley Watkins, Naman Goel, and everyone else who worked on stylex at Facebook for being kind enough to share their lessons publicly. And anyone else I have missed.

The post style9: build-time CSS-in-JS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A Bit on Web Component Libraries

Css Tricks - Tue, 07/28/2020 - 12:49pm

A run of Web Components news crossed my desk recently so I thought I’d group it up here.

To my mind, one of the best use cases for Web Components is pattern libraries. Instead of doing, say, <ul class="nav nav-tabs"> like you would do in Bootstrap or <div class="tabs"> like you would in Bulma, you would use a custom element, like <designsystem-tabs>.

The new Shoelace library uses the sl namespace for their components. It’s a whole pattern library entirely in Web Components. So the tabs there are <sl-tab-group> elements.

Why is that good? Well, for one thing, it brings a component model to the party. That means, if you’re working on a component, it has a template and a stylesheet that are co-located. Peeking under the hood of Shoelace, you can see this is all based on Stencil.

Another reason it’s good is that it means components can (and they do) use the Shadow DOM. This offers a form of isolation that comes right from the web platform. For CSS folks like us, that means the styling for a tab in the tab component is done with a .tab class (hey, wow, cool) but it is isolated in that component. Even with that generic of a name, I can’t accidentally mess with some other component on the page that uses that generic class, nor is some other outside CSS going to mess with the guts here. The Shadow DOM is a sort of wall of safety that prevents styles from leaking out or seeping in.
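
To make that concrete, here's a minimal, hand-rolled sketch of the idea (no library involved, and the styles are made up):

class DesignSystemTabs extends HTMLElement {
  constructor() {
    super();
    // Styles inside the shadow root are isolated: this .tab class
    // can't leak out, and outside .tab styles can't seep in
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        .tab { padding: 0.5em 1em; border-bottom: 2px solid; }
      </style>
      <div class="tab"><slot></slot></div>
    `;
  }
}

customElements.define('designsystem-tabs', DesignSystemTabs);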

I just saw the FAST framework¹ too, which is also a set of components. It has tabs that are defined as <fast-tabs>. That reminds me of another thing I like about the Web Components as a pattern library approach: it feels like it's API-driven, even starting with the name of the component itself, which is literally what you use in the HTML. The attributes on that element can be entirely made up. It seems the emerging standard is that you don't even have to data-* prefix the attributes that you also make up to control the component. So, if I were to make a tabs component, it might be <chris-tabs active-tab="lunch" variation="rounded">.
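
A sketch of how that hypothetical element might read those made-up attributes (nothing here is a real library, just the platform APIs):

class ChrisTabs extends HTMLElement {
  connectedCallback() {
    // Made-up attributes, read straight off the element;
    // no data-* prefix required
    const activeTab = this.getAttribute('active-tab') || 'lunch';
    const variation = this.getAttribute('variation') || 'rounded';
    console.log(`rendering ${variation} tabs, ${activeTab} selected`);
    // ...render the tab markup accordingly
  }
}

customElements.define('chris-tabs', ChrisTabs);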

Perhaps the biggest player using Web Components for a pattern library is Ionic. Their tabs are <ion-tabs>, and you can use them without involving any other framework (although they do support Angular, React, and Vue in addition to their own Stencil). Ionic has made lots of strides with this Web Components stuff, most recently supporting Shadow Parts. Here’s Brandy Carney explaining the encapsulation again:

Shadow DOM is useful for preventing styles from leaking out of components and unintentionally applying to other elements. For example, we assign a .button class to our ion-button component. If an Ionic Framework user were to set the class .button on one of their own elements, it would inherit the Ionic button styles in past versions of the framework. Since ion-button is now a Shadow Web Component, this is no longer a problem.

However, due to this encapsulation, styles aren’t able to bleed into inner elements of a Shadow component either. This means that if a Shadow component renders elements inside of its shadow tree, a user isn’t able to target the inner element with their CSS.

The encapsulation is a good thing, but indeed it does make styling “harder” (on purpose). There is an important CSS concept to know: CSS custom properties penetrate the Shadow DOM. However, it was decided — and I think rightly so — that “variablizing” every single thing in a design system is not a smart way forward. Instead, they give each bit of HTML inside the Shadow DOM a part, like <div part="icon">, which then gives us the ability to “reach in from the outside” with CSS, like custom-component::part(icon) { }.
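
Here's a sketch with a made-up icon-button element to show both sides of that:

class IconButton extends HTMLElement {
  constructor() {
    super();
    // part="icon" deliberately exposes this one element to outside CSS
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <button><span part="icon">★</span><slot></slot></button>
    `;
  }
}

customElements.define('icon-button', IconButton);

// Outside CSS can now reach exactly that element, and nothing else:
// icon-button::part(icon) { color: rebeccapurple; }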

I think part-based styling hooks are mostly fine, and a smart way forward for pattern libraries like this, but I admit some part of it bugs me. The selectors don’t work how you’d expect. For example, you can’t conditionally select things. You also can’t select children or use the cascade. In other words, it’s just one-off, or like you’re reaching straight through a membrane with your hand. You can reach forward and either grab the thing or not, but you can’t do anything else at all.

Speaking of things that bug people, Andrea Giammarchi has a good point about the recent state of Web Components:

Every single library getting started, including mine, suggest we should import the library in order to define what [sic] supposed to be a “portable Custom Element”.

Google always suggests LitElement. Microsoft wants you to use FASTElement. Stencil has their own Component. hyperHTML has their own Component. Nobody is just using "raw" Web Components. It's weird! What strikes me as the worst part about that is that Web Components are supposed to be this "native platform" thing, meaning we shouldn't need to buy into some particular technology in order to use them. When we do, we're just as locked in as we would be if we just used React or whatever.

Andrea has some ideas in that article, including the use of some new and smaller library. I think what I’d like to see is a pattern library that just doesn’t use any library at all.

  1. FAST calls itself an "interface system," then a "UI framework" in consecutive sentences on the homepage. Shoelace calls itself a "library," but I'm calling it a "pattern library." I find "design system" to be the most commonly used term to describe the concept, but it is often used more broadly than a specific technology. FAST uses that term in the code itself for the wrapper element that controls the theme. I'd say the terminology around all this stuff is far from settled.

The post A Bit on Web Component Libraries appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

The GitHub Profile Trick

Css Tricks - Mon, 07/27/2020 - 2:19pm

Monica Powell shared a really cool trick the other day:

The profile README is created by creating a new repository that’s the same name as your username. For example, my GitHub username is m0nica so I created a new repository with the name m0nica.

Now the README.md from that repo is essentially the homepage of her profile. Above the usual list of popular repos, you can see the rendered version of that README on her profile:

Lemme do a super simple version for myself real quick just to try it out…

OK, I start like this:

Then I’ll go to repo.new (hey, CodePen has one of those cool domains too!) and make a repo on my personal account that is exactly the same as my username:

I chose to initialize the repo with a README file and nothing else. So immediately I get:

I can edit this directly on the web, and if I do, I see more helpful stuff:

Fortunately, my personal website has a Markdown bio ready to use!

I’ll copy and paste that over.

After committing that change, my own profile shows it!

Maybe I’ll get around to doing something more fun with it someday. Monica’s post has a bunch of fun examples in it. My favorite is Kaya Thomas’ profile, which I saw Jina Anne share:

Inspired by @waterproofheart, I created a README for my @github profile! Illustrations by the amazing @yess_esse 😃

Really like that we can personalize our profiles like this now! pic.twitter.com/uOTheewnK9

— Kaya Thomas (@kthomas901) July 20, 2020

You can’t use CSS in there (because GitHub strips it out), so I love the ingenuity of using old school <img align="right"> to pull off the floating image look.

The post The GitHub Profile Trick appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

CSS Vocabulary

Css Tricks - Mon, 07/27/2020 - 11:47am

This is a neat interactive page by Ville V. Vanninen to reference the names of things in the CSS syntax. I feel like the easy ones to remember are “selector,” “property,” and “value,” but even as a person who writes about CSS a lot, I forget some of the others. Like the property and value together (with the colon) is called a declaration. And all the declarations together, including the curly brackets (but not the selector)? That’s a declaration block, which is slightly more specific than a block, because a block might be inside an at-rule and thus contain other complete rule-sets.

Direct Link to ArticlePermalink

The post CSS Vocabulary appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.
