Front End Web Development

Where Does Logic Go on Jamstack Sites?

Css Tricks - Mon, 08/24/2020 - 4:36am

Here’s something I had to get my head wrapped around when I started building Jamstack sites. There are these different stages your site goes through where you can put logic.

Let’s look at a specific example so you can see what I mean. Say you’re making a website for a music venue. The most important part of the site is a list of events, some in the past and some upcoming. You want to make sure to label them as such, or design them so the difference is very clear. That is date-based logic. How do you do that? Where does that logic live?

There are at least four places to consider when it comes to Jamstack.

Option 1: Write it into the HTML ourselves

Literally sit down and write an HTML file that represents all of the events. We’d look at the date of the event, decide whether it’s in the past or the future, and write different content for either case. Commit and deploy that file.

<h1>Upcoming Event: Bill's Banjo Night</h1>
<h1>Past Event: 70s Classics with Jill</h1>

This would totally work! But the downside is that we’d have to update that HTML file all the time — once Bill’s Banjo Night is over, we have to open our code editor, change “Upcoming” to “Past” and re-upload the file.

Option 2: Write structured data and do logic at build time

Instead of writing all the HTML by hand, we create a Markdown file to represent each event. Important information like the date and title is in there as structured data. That’s just one option. The point is we have access to this data directly. It could be a headless CMS or something like that as well.

Then we set up a static site generator, like Eleventy, that reads all the Markdown files (or pulls the information down from our CMS) and builds them into HTML files. The neat thing is that we can run any logic we want during the build process. Do fancy math, hit APIs, run a spell-check… the sky is the limit.

For our music venue site, we might represent events as Markdown files like this:

---
title: Bill's Banjo Night
date: 2020-09-02
---

The event description goes here!

Then, we run a little bit of logic during the build process by writing a template like this:

{% if event.date > now %}
  <h1>Upcoming Event: {{event.title}}</h1>
{% else %}
  <h1>Past Event: {{event.title}}</h1>
{% endif %}

Now, each time the build process runs, it looks at the date of the event, decides if it’s in the past or the future and produces different HTML based on that information. No more changing HTML by hand!
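If you’re curious where a value like now might come from in Eleventy, one option is a tiny global data file. Here’s a sketch assuming Eleventy’s _data convention (this file isn’t part of the original example):

// _data/now.js — anything exported from the _data folder becomes global
// template data in Eleventy, so templates can compare event.date to `now`.
module.exports = new Date();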

The problem with this approach is that the date comparison only happens one time, during the build process. The now variable in the example above is going to refer to the date and time the build happens to run. And once we’ve uploaded the HTML files that build produced, those won’t change until we run the build again. This means that once an event at our music venue is over, we’d have to re-run the build to make sure the website reflects that.

Now, we could automate the rebuild so it happens once a day, or heck, even once an hour. That’s literally what the CSS-Tricks conferences site does via Zapier.

The conferences site is deployed daily using a Zapier automation that triggers a Netlify deploy, ensuring information is current.
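The mechanics are simple: Netlify gives you a build hook URL, and anything that can make an HTTP POST on a schedule (Zapier, a cron job, a serverless function) can kick off a deploy. Here’s a rough sketch; the hook ID is a placeholder:

// Ping a Netlify build hook to trigger a fresh deploy of the site.
// Replace the hook ID with your own from Site settings → Build hooks.
fetch("https://api.netlify.com/build_hooks/YOUR_HOOK_ID", { method: "POST" });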

But this could rack up build minutes if you’re using a service like Netlify, and there might still be edge cases where someone gets an outdated version of the site.

Option 3: Do logic at the edge

Edge workers are a way of running code at the CDN level whenever a request comes in. They’re not widely available at the time of this writing but, once they are, we could write our date comparison like this:

// THIS DOES NOT WORK — illustrative only; edge workers aren't widely available yet
import eventsList from "./eventsList.json"

function onRequest(request) {
  const now = new Date();
  eventsList.forEach(event => {
    if (event.date > now) {
      event.upcoming = true;
    }
  })
  const props = {
    events: eventsList,
  }
  request.respondWith(200, render(props), {})
}

The render() function would take our processed list of events and turn it into HTML, perhaps by injecting it into a pre-rendered template. The big promise of edge workers is that they’re extremely fast, so we could run this logic server-side while still enjoying the performance benefits of a CDN.

And because the edge worker runs every time someone requests the website, we can be sure that they’re going to get an up-to-date version of it.

Option 4: Do logic at run time

Finally, we could pass our structured data to the front end directly, for example, in the form of data attributes. Then we write JavaScript that’s going to do whatever logic we need on the user’s device and manipulate the DOM on the fly.

For our music venue site, we might write a template like this:

<h1 data-date="{{event.date}}">{{event.title}}</h1>

Then, we do our date comparison in JavaScript after the page is loaded:

function processEvents(){
  // grab every element that carries a data-date attribute
  const events = document.querySelectorAll('[data-date]')
  const now = new Date()
  events.forEach(event => {
    const eventDate = new Date(event.getAttribute('data-date'))
    if (eventDate > now){
      event.classList.add('upcoming')
    } else {
      event.classList.add('past')
    }
  })
}

The now variable reflects the time on the user’s device, so we can be pretty sure the list of events will be up-to-date. Because we’re running this code on the user’s device, we could even get fancy and do things like adjust the way the date is displayed based on the user’s language or timezone.

And unlike the previous points in the lifecycle, run time lasts as long as the user has our website open. So, if we wanted to, we could run processEvents() every few seconds and our list would stay perfectly up-to-date without having to refresh the page. This would probably be unnecessary for our music venue’s website, but if we wanted to display the events on a billboard outside the building, it might just come in handy.
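A minimal sketch of that idea: run the check once on load, then keep re-running it (the five-second interval is arbitrary):

document.addEventListener("DOMContentLoaded", () => {
  processEvents();                   // label events as soon as the page loads
  setInterval(processEvents, 5000);  // then re-check every few seconds
});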

Where will you put the logic?

Although one of the core concepts of Jamstack is that we do as much work as we can at build time and serve static HTML, we still get to decide where to put logic.

Where will you put it?

It really depends on what you’re trying to do. Parts of your site that hardly ever change are totally fine to complete at edit time. When you find yourself changing a piece of information again and again, it’s probably a good time to move that into a CMS and pull it in at build time. Features that are time-sensitive (like the event examples we used here), or that rely on information about the user, probably need to happen further down the lifecycle at the edge or even at runtime.


This vs. That

Css Tricks - Mon, 08/24/2020 - 4:36am

Here’s a nice site from Phuoc Nguyen, who I’ve noted before has quite a knack for clever sites. This vs. That pits different related concepts against each other as a theme for an article. For example, CSS has display: none;, opacity: 0;, and visibility: hidden; and they all, on the surface “hide” something, but they are all markedly different in ways that are important to understand. That’s one of the articles. The content is open source as well, if you feel like adding anything.

This reminds me of this Pen from Adam Thompson:

CodePen Embed Fallback

All that Pen is doing is setting the colors of some pill boxes, but it does it in literally eight different ways — in this case, none of them are “better” than another:

  1. Swap a class
  2. Swap a class, colors defined in Sass @mixin
  3. Swap a class, class swaps value of a custom property
  4. Swap the value of a custom property
  5. Swaps the value of a custom property, colors stored in JavaScript only
  6. Set inline styles
  7. Manipulate the CSSOM
  8. Set a non-standard color attribute

They all ultimately do the same thing. And there could be many more: change a class on a higher-up parent. Use data-* attributes. Use some kind of hue-shifting filter. Use color math in JavaScript to manipulate hues. Use the checkbox hack to change styling. Surely there are even dozens more.
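As a taste, here are hedged sketches of two of those approaches (the element and custom property names are made up for illustration):

// 4. Swap the value of a custom property
document.documentElement.style.setProperty("--pill-color", "rebeccapurple");

// 6. Set inline styles directly on the element
document.querySelector(".pill").style.backgroundColor = "rebeccapurple";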

Direct Link to ArticlePermalink


Offering Options for mailto: and tel: Links

Css Tricks - Fri, 08/21/2020 - 11:33am

I generally like mailto: links. But I feel like I can smell a mailto: link without even inspecting or clicking it, like some kind of incredibly useless superpower. I know that if I’ve got my default mail client set, clicking that link will do what I want it to do, and if I want, I can right-click and the browser will give me a “Copy email address” option to grab it cleanly.

That’s cool and all, but Adam Silver and Amy Hupe recently enumerated the problems with how these links behave:

Firstly, mailto links make it hard to copy the address, for example if you want to share the email address with someone else.

Secondly, some users use more than one mail app, and the link just uses whichever has been setup as the default, without giving them the option to use the other.

And finally, many users don’t have an email application set up, which means the link can take them to a dead end or down a rabbit hole.

Their UI experimentation ended up using a mailto: link, but putting the entire email address as the link which makes it especially obvious what the link does, while also offering a Copy button for a little UX bonus.
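A rough sketch of that pattern might look like this (the markup and class name are hypothetical, and the Clipboard API handles the copy in modern browsers):

<a href="mailto:hello@example.com">hello@example.com</a>
<button class="copy-email">Copy</button>

<script>
  // Copy the address to the clipboard when the button is clicked
  document.querySelector(".copy-email").addEventListener("click", () => {
    navigator.clipboard.writeText("hello@example.com");
  });
</script>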

tel: links are weirder in the sense that a good many devices looking at them don’t have any phone-calling functionality. If they do, it’s a lot like email links in that multiple apps could do that work (e.g. WhatsApp, FaceTime, or the default phone app).

The hard part of the UX of all this is offering users choice on what they want these special link types to do. That’s what mailgo is attempting to solve. It’s a little JavaScript library that offers UI when you click them.

Live demo:

CodePen Embed Fallback

I kinda like it. I wouldn’t mind at all if that popped up when I clicked a link like this, especially since it has that “open default” option if I want that anyway. Seems to check all the boxes for the problems these types of special links can have.


A CSS-only, animated, wrapping underline

Css Tricks - Fri, 08/21/2020 - 11:33am

Nicky Meuleman, inspired by Cassie Evans, details how they built the anchor link hover on their sites. When a link is hovered, another color underline kinda slides in with a gap between the two. Typical text-decoration doesn’t help here, so multiple backgrounds are used instead, and fortunately, it works with text that breaks across multiple lines as well.
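The gist of the multiple-background trick looks something like this. It’s a simplified sketch, not Nicky’s or Cassie’s exact code:

a {
  text-decoration: none;
  /* two gradients act as two underlines; the second starts at zero width */
  background-image:
    linear-gradient(to right, currentColor, currentColor),
    linear-gradient(to right, deeppink, deeppink);
  background-size: 100% 2px, 0% 2px;
  background-position: 0 100%, 0 100%;
  background-repeat: no-repeat;
  transition: background-size 0.3s;
}
a:hover {
  /* on hover, the first underline shrinks away and the second slides in */
  background-size: 0% 2px, 100% 2px;
}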

CodePen Embed Fallback

Direct Link to ArticlePermalink


Leading-Trim: The Future of Digital Typesetting

Css Tricks - Fri, 08/21/2020 - 4:12am

leading-trim is a suggested new CSS property that lets us remove the extra spacing in every font so that we can more predictably style text. Ethan Wang has written about it — including how Microsoft has advocated for it — and that it’s now part of the Inline Layout Module Level 3 spec.

You’d use it like this:

h1 {
  leading-trim: both;
  text-edge: cap alphabetic;
}

This is telling the browser to look at the font file, dig into the OpenType metrics, and effectively do what Ethan demonstrates in this gif:

Why do we want to do this? Well, it would let us space text inside a button properly without any strange hacks and we’d be able to set predictable spacing values between different typefaces too. I’m pretty excited about this spec and the CSS property because it gives us yet one more tool to control the use of typography on the web — like taming line height.

Direct Link to ArticlePermalink


Optimize Images with a GitHub Action

Css Tricks - Thu, 08/20/2020 - 11:56am

I was playing with GitHub Actions the other day. Such a nice tool! Short story: you can have it run code for you, like run your build processes, tests, and deployments. But it’s just configuration files that can run whatever you need. There is a whole marketplace of Actions wanting to do work for you.

What I wanted to do was run code to do image optimization. That way I never have to think about it. Any image in the repo has been optimized.

There is an action for this already, Calibre’s image-actions, which we’ll leverage here. You’ll also need to ensure Actions is enabled for the repo. I know in my main organization we only flip on Actions on a per-repo basis, which is one of the options.

Then you make a file at .github/workflows/optimize-images.yml. That’s where you can configure this action. All your actions can have separate files, if you want them to. I made this a separate file because (1) it only works with “pushes to pull requests,” so if you have other actions that run on different triggers, they won’t mix nicely, and (2) that’s what’s in their docs and looks like the suggested usage.

name: Optimize images
on: pull_request
jobs:
  build:
    name: calibreapp/image-actions
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@master
      - name: Compress Images
        uses: calibreapp/image-actions@master
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}

Now if you make a pull request, you’ll see it run:

That successful run then leaves a comment on the pull request saying what it was able to optimize:

It will literally re-commit those files back to the pull request as well, so if you’re going to stay on the pull request and keep working, you’ll need to pull again to get the optimized images before you push more changes.

I can look at that automatic commit and see the difference:

The commit preview in Git Tower.

Now I can merge the PR knowing all is well:

Pretty cool. Is optimizing your images locally particularly hard? No. Is never having to think about it again better? Yeah. You’re taking on a smidge of technical debt here, but reducing it elsewhere, which is a very fair trade, at least in my book.


To grid or not to grid

Css Tricks - Thu, 08/20/2020 - 11:54am

Sarah Higley does accessibility work and finds that “tables and grids are over-represented in accessibility bugs.”

The drum has been banged a million times: don’t use a <table> for layout. But what goes around comes around. What’s the #1 item in a list of “some of the ways tables and grids can go wrong”?

Using a grid when a table is needed, or vice versa

The day has come. CSS grid has dug its way into usage so deeply that developers are using it by default instead of using a classic <table>. And we don’t even have flying cars yet!

Sarah shows clear examples of both techniques and how the same information can be presented in different ways both visually and semantically. For example, a list of upcoming concerts can be displayed as a <table>, and that might be fine if you can imagine the purpose of the table being used for sorting or comparing, but it can also be presented as a grid, which has other advantages, like headers that are easier to skim.

Direct Link to ArticlePermalink


Make the Letter Bigger

Typography - Thu, 08/20/2020 - 7:25am

Read the book, Typographic Firsts

A brief history of the drop-cap: Decorated or illuminated initials were an important part of medieval manuscripts for a thousand years. From luxurious gold and silver letters to plain drop capitals, they functioned to illustrate, commentate, and adorn the text. Learn their history and purpose, why they eventually went out of fashion, and what replaced them.


Let’s Make Generative Art We Can Export to SVG and PNG

Css Tricks - Wed, 08/19/2020 - 9:48am

Let’s say you’re a designer. Cool. You’ve been hired to do some design work for a conference. All kinds of stuff. Website. Printed schedules. Big posters for the rooms. Preroll slides. You name it.

So you come up with an aesthetic for it all — a design vibe that ties it all together and makes it feel cohesive. Yet each usage will be unique and different. Cool, let’s go from there.

You’re mucking around in your design software, and the aesthetic you come up with is these overlapping rectangles in a randomized pattern with a particular limited color palette that you think can work for all the materials.

Hey, sure. That’s a fun background pattern. You can lay white boxes on top of it to set type or whatever, this is just the general background aesthetic that you can use broadly.

But it’s not very random while it’s in design software, is it? I suppose you could figure out how to script the software. But we’re web people so let’s get webby with it. Let’s lean on JavaScript and SVG to start.

We could define our color palette programmatically like:

const colorPalette = ["#9B2E69", "#D93750", "#E2724F", "#F3DC7B", "#4E9397"];

Then write a function that just makes a bunch of random rectangles based on a minimum and maximum value you give it:

const rand = (max) => {
  return Math.floor(Math.random() * max);
};

const makeRects = (maxX, maxY) => {
  let rects = "";
  for (let i = 0; i < 100; i++) {
    rects += `
      <rect
        x="${rand(maxX + 50) - 50}"
        y="${rand(maxY + 50) - 50}"
        width="${rand(200) + 20}"
        height="${rand(200) + 20}"
        opacity="0.8${rand(10)}"
        fill="${colorPalette[rand(5)]}"
      />
    `;
  }
  return rects;
};

You could call that function and slap all those rectangles in an <svg> and get some nice generative artwork.

Now your work is easy! To make new ones, you run the code over and over and then you get nice SVG to use for whatever you need.

Let’s say your client is asking you for some of this artwork to use as backgrounds on other things they are working on too. They need a background with different dimensions! At a different aspect ratio! They need it right now!

The fact that we’re doing this in the browser is awfully helpful here. The browser window can be resized easily. Wow, I know. So let’s size the parent SVG to the entire viewport. This is the SVG that calls that function to make all the random rectangles here:

const makeSVG = () => {
  const w = document.body.offsetWidth;
  const h = document.body.offsetHeight;
  const svg = `<svg width="${w}" height="${h}">
    ${makeRects(w, h)}
  </svg>`;
  return svg;
};

So, if we’re doing this in the browser, we’ll get a wide and squat SVG result if the browser is super wide and squat:

But how do we get that out of the browser and into an actual SVG file? Well, there are probably native platform ways to do it, but I just Google’d my way out of it and found a snippet of code that did the trick. I take the SVG as a string, chuck it in a data URL as the href on a link, and fake-click that link. I do that on the click of a button.

function download(filename, text) {
  var pom = document.createElement("a");
  pom.setAttribute(
    "href",
    "data:text/plain;charset=utf-8," + encodeURIComponent(text)
  );
  pom.setAttribute("download", filename);

  if (document.createEvent) {
    var event = document.createEvent("MouseEvents");
    event.initEvent("click", true, true);
    pom.dispatchEvent(event);
  } else {
    pom.click();
  }
}

const downloadSvgButton = document.querySelector("#download-svg-button");
downloadSvgButton.addEventListener("click", () => {
  download("art.svg", window.globalSVGStore);
});

But I need it as a PNG!

…cries your client. Fair enough. Not everyone has software that can view and deal with SVG. You could just take a screenshot of the page. And, honestly, that might be a good way to go. I have a high pixel density display and those screenshots turn out great.

But now that we’ve built a downloader machine for the SVG, we might as well make it work for PNG too. This time my Googling led to FileSaver.js. If I have a <canvas>, I can toBlob that thing and save it to a file. And I can turn my <svg> into a <canvas> via canvg.
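That combination might look roughly like this, assuming FileSaver.js’s saveAs() is loaded and a hypothetical #download-png-button exists:

const downloadPngButton = document.querySelector("#download-png-button");
downloadPngButton.addEventListener("click", () => {
  // The <canvas> has already been painted from the SVG (via canvg, below),
  // so we can hand its pixels to FileSaver.js as a PNG blob.
  document.querySelector("canvas").toBlob((blob) => {
    saveAs(blob, "art.png");
  });
});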

So, when we call our function to make the SVG, we’ll just paint it to a canvas, which will automatically be the same size as the SVG, which we’ve sized to cover the viewport.

const setup = () => {
  // ctx is the 2D context of the <canvas> we're painting the SVG onto
  const v = canvg.Canvg.fromString(ctx, makeSVG());
  v.start();
};

We can call that setup function whenever, so might as well make a button for it, too, and call it when the browser window resizes. Here it is in action:

And here’s the final thing:

CodePen Embed Fallback

It could be a lot smarter. It could decide how many rectangles to draw based on the viewport volume, for example. I just think it’s very neat to essentially build an art-generating machine for making design assets, particularly to solve real-world client problems.

This idea was totally taken from a peek I had at a tool some actual designers built like this. Theirs was way cooler and had even more options, and I know who they built it for was very happy with it because that’s who showed it to me. I reached out to that designer but they were too busy to take on a writing gig like this.


Chapter 3: The Website

Css Tricks - Wed, 08/19/2020 - 9:48am
Previously in web history…

Berners-Lee, motivated by his own curiosity, creates the World Wide Web at CERN. He releases its technologies to the public domain, which enables the development of several new browsers for every operating system. Mosaic proves to be the most popular, and its introduction of color images directly inline in content fundamentally changes the way people think about the web.

The very first website was about the web. That kind of thing is not all that unusual. The first email sent to another person was about email. As technology has progressed, we may have lost a bit of the theatrics. The first telegraph, for instance, read “WHAT HATH GOD WROUGHT.” However, in most cases, telecommunication firsts follow this meta template.

Anyway, the first website was instructive for a reason. If you were a brand new web user, it is the first thing you would see. If that page didn’t manage to convince you the web was worth sinking a bit of time into, then that was the end of the story. You’d go and check out Gopher instead. So, as a starting point for new web users, the first website was critical.

The URL was info.cern.ch. Its existence on the CERN server should be of no surprise. The first website was created by the web’s inventor, Tim Berners-Lee, while he was still working there.

It was a simple page. A list of headers and links — to download web browser code, find out more info about the web, and get all of the technical details — was divided only by short descriptions of each section. One link brought you to a list of websites. Berners-Lee collected a list of links that were sent to him, or plucked them from mailing lists whenever he found them. Every time he found a link he added it to the CERN website, loosely organized by category. It was a short list. In July of 1993, there were still only about 130 websites in the world.

(A few years back, some enterprising folks took it upon themselves to re-create the first website at CERN. So you can go and browse it now, just as it was then.)

As far as websites go, it was nothing spectacular. The language was plain enough, though a bit technical. The instructions were clear, as long as you had some background in programming or computers. The web before the web was difficult to explain. The primary goal of the website was to prompt a bit of exploration from those who visited it. By that measure, it was successful.

But Berners-Lee never meant for the CERN website to be the most important page on the web. It was just there to serve as an example for others to recreate in their own image.

Tim Berners-Lee also created the first browser. It gave users the ability to both read — and crucially to publish — websites. In his conception, each consumer of the web would have their own personal homepage. The homepage could be anything. For most people, he thought, it would likely be a private place to store personal bookmarks or jot down notes. Others might choose to publish their site for the public, using it as an opportunity to introduce themselves, or explore some passion (similar to what services like Geocities would offer later). Berners-Lee imagined that when you opened your browser, any browser, your own homepage would be the first thing that you saw.

By the time other browsers hit the market, the publishing capabilities faded away. People were left to simply surf, and not to author, the web. For the earliest of web users, the CERN website remained a popular destination. With usage still growing, it was the best place to find a concise list of websites. But if the web was going to succeed — truly succeed — it was going to have to be more than links. The web was going to need to find its utility.

Fortunately Berners-Lee had created the URL. Anyone could create a website. Heck, he’d even post a link to it.

“Louise saw the web as a godsend,” Berners-Lee wrote in his personal retelling of the web’s history. The Louise in question is Louise Addis, librarian at SLAC for over 40 years before she retired in the mid-90s. Along with Paul Kunz, Tony Johnson, and several others, she helped create the first web server in the United States and one of the most influential websites of the early web. She would later put it a bit differently. “The Web was a revolution!” That may be true, but it wouldn’t have been a revolution if not for what she helped create.

As we found in the first chapter, Berners-Lee’s curiosity led him on a path to set information free. Louise Addis was also curious. Her curiosity led her to try to connect people to that information. She studied International Relations at Stanford University only to bounce around at a few jobs and land herself back at her alma mater working for a secret research lab known simply as Project M in 1960. Though she had no experience in the field, she worked there as a librarian, eventually moving up to head librarian. After a couple of years, the lab would go public and become formally known as the Stanford Linear Accelerator Center, or SLAC.

SLAC’s primary mission was to advance the research of American scientists in the wake of World War II. It houses a two-mile long linear accelerator, the longest in the world. SLAC recruits scientists across a broad set of fields, but its primary focus is particle physics. It has produced a number of Nobel prizes and has shared groundbreaking new discoveries across the world.

Research is at the center of the work done at SLAC. While she was there, Addis was relentless in her quest to connect her peers with research. When she learned that there wasn’t a good system for keeping track of the multitude of authors attributed to particle physics papers (some had over 1,000 authors on a single paper), she picked up a bit of programming with no formal training. “If I needed to know something, I asked someone to show me how to do a particular task. Then I went back to the Library and tried it on my own.”

A couple of years after she discovered the web, Addis would start the first unofficial tech support group for web newcomers known as the WWW Wizards. The Wizards worked — mostly in their spare time — to help new web users come online. They were a profoundly important resource for the early web. Addis continually made it her mission to help people find the information they needed.

She used her ad-hoc programming experience in the late 1960’s to create the SPIRES-HEP database, a digital library with hundreds of thousands of bibliographic records for particle physics papers. It is still in use today, though its newest iteration is called INSPIRE-HEP. The SPIRES-HEP database was a foundational resource. If you were a particle physics researcher anywhere in the world, you would be accessing it frequently. It ran on an IBM mainframe that looked like this:

The mainframe used a very specific programming language also developed by IBM, which has since gone into disuse. Locked inside was a very well organized bibliography of research papers. Accessing it was another thing entirely. There were a few ways to do that.

The first required a bit of programming knowledge. If you were savvy enough, you could log directly into the SPIRES-HEP database remotely and, using the database-specific SPIRES query language, pull the records you needed directly from the mainframe. This was the quickest option, but required the most technical know-how and a healthy dose of tenacity. Let’s consider this method the high bar.

The middle bar was an interface built by SLAC researcher Paul Kunz that let you email the server to pull out the records you needed. You still needed to know the SPIRES query language, but it solved the remote access part of the equation.

The low bar was to email or message a librarian at SLAC so they could pull the record for you and send it back. The easiest bar to clear, this was the method that most people used. Which meant that the most widely accessed particle physics database in the world was beset by a bottleneck of librarians at SLAC who needed to ferry bibliographic records back and forth from researchers.

The SPIRES-HEP database was invaluable, but widespread access remained its largest obstacle.

For a second time in the web’s history, the NeXT computer played an important role in its fate. For a computer that was short-lived, and largely unheard of, it is a key piece of the web’s history.

Like Tim Berners-Lee, SLAC physicist Paul Kunz, creator of the SPIRES-HEP instant messaging and email service, used a NeXT computer. During one of Kunz’s visits to CERN, Berners-Lee invited him into his office. The only reason Kunz agreed to go was to see how somebody else was using a NeXT computer. While he was there, Berners-Lee showed Kunz the web. And then Kunz went back to SLAC and showed the web to Addis.

Kunz and Addis were both enthusiastic purveyors of research at SLAC. They each played their part in advancing information discovery. When Kunz told Addis about the web, they both had the same idea about what to do with it. SLAC was going to need a website. Kunz built a web server at Stanford — the first in the United States. Addis, meanwhile, wrangled a few colleagues to help her build the SLAC website. The site launched on December 12, 1991, a year after Berners-Lee first published his own website at CERN.

Most of the programmers and researchers that began tinkering on the web in the early days were drawn by a nerdy fascination. They liked to play around with browsers, mess around with some code. The website was, in some cases, the mere after-effect of a technological experiment. That wasn’t the case for Addis. The draw of the web wasn’t its technology. It was what it enabled her to do.

The SLAC website started out with two links. The first one let you search through a list of phone numbers at SLAC. That link wasn’t all that interesting. (But it was a nice nod to the web’s origin. The most practical early use of the web was as an Internet-enabled phonebook at CERN.) The second link was far more interesting. It was labeled “HEP.” Clicking on it brought you to a simple page with a single text field. Type a query into that field, click Enter and you got live results of records directly from the SPIRES-HEP database. And that was the SLAC website. Its primary purpose was to act as an interface in front of the SPIRES-HEP database and pull down queried results.

When Berners-Lee demoed the SLAC website a couple of months later at a conference, it was met with wild applause, practically a standing ovation.

The importance was obviously not lost on that audience. No longer would researchers be forced to wrestle with complicated programming languages, or emails to SLAC librarians. The SLAC website took the low bar of access for the SPIRES-HEP database and dropped it all the way to the floor. It made searching the database easy (and within a couple of years, it would even add links to downloadable PDFs).

The SLAC website, nothing more than a searchable bibliography, was the beginning of something on the web. Physicists began using it, and it rebounded from one research lab to the next. The web’s first micro-explosion happened the day Berners-Lee demoed the site. It began reverberating around the physics community, and then outside of it.

SLAC was the website that showed what the web could do. GNN was going to be the first that made the web look good doing it.

Global Network Navigator was going to be exciting. A bold experiment on and with the web. The web was a wall of research notes and scientific diagrams; plain black text on stark white backgrounds as far as the eye could see. GNN would change that. It would be fun. Lively. Interactive.

That was the pitch made to designer Jennifer Robbins by O’Reilly co-founder Dale Dougherty in 1993. Robbins’ mind immediately jumped to the possibilities of this incredible, new, digital medium.

She met with another O’Reilly employee, Rob Raisch. A couple of years after that pitch, Raisch would propose one of the first examples of a stylesheet. At the time, he was just the person at the company who happened to know the most about the web, which had only recently cracked a hundred total sites. When Robbins walked into his office, the first thing he said to her was: “You know, you probably can’t do what you want.” He had a point. The language of the web was limiting. But the GNN team was going to find a way around that.

GNN was the brainchild of Dale Dougherty. By the early 90s, Dougherty had become a minor celebrity for experiments just like this one. From the early days of O’Reilly media, the book publisher he co-founded, he was always cooking up some project or another.

Wherever technology is going, Dougherty has a knack for being there first. At one conference early on in O’Reilly’s history, he sold self-printed copies of a Unix manual for $5 apiece just before Unix exploded on the scene. After spending decades in book publishing, he’s recently turned his attention to the maker culture. He has been called a godfather of the Maker movement.

That was no less true for the web. He became one of the web’s earliest adopters and its most prolific early champion. He brought together Tim Berners-Lee and the developers of NCSA Mosaic, including Marc Andreessen, for the first time in a meeting in Cambridge. That meeting would eventually lead to the creation of the W3C. He’d be responsible for early experiments with web advertising, basically on the first day advertising was allowed. He would later coin the term Web 2.0, in the wake of transformation after the dot-com boom. Dougherty loved the web.

But staring at the web for the first time in the early 90s, he didn’t exactly know what to do with it. His first thought was to put a book on the web. After all, O’Reilly had a gigantic back catalog, and the web was mostly text. But Dougherty knew that the web’s greatest asset was the hyperlink. He needed a book that could act as a springboard to bring people to different parts of the web. He found it in the newly-published bestseller by author Ed Krol, The Whole Internet User’s Guide and Catalog. The book was a guided tour through the technologies of the Internet. It had a paragraph on the web. Not exactly a lot, but enough for Dougherty to make the connection.

Dougherty had recruited Pei-Yuan Wei, creator of the popular ViolaWWW browser, to make an earlier version of an interactive Internet guide. For GNN, he pulled together a production team — led by managing editor Gina Blaber — of writers, designers, programmers, and sales staff. They launched GNN, the web’s first true commercial website, in early 1993.

GNN was created before any other commercial websites, before blogs, and online magazines. Digital publishing was something new altogether. As a result, GNN didn’t quite know what it wanted to be. It operated somewhere between a portal and a magazine. Navigating the site was an exercise in tumbling down one rabbit hole after another.

In one section, the site included the Whole Internet Catalog repurposed and ported to the web. Contained within were pages upon pages of best-of lists; collections of popular websites sorted into categories like finance, literature and cooking.

Another section, labeled GNN Magazine, jumped to a different group of sortable webpages known as metacenters. These were, in the website’s own description, “special-interest magazines that gather together the best Internet resources on topics such as travel, music, education, and computers. Each metacenter contains articles, columns, reference guides, and discussion groups.” Though conceptually similar to modern day media portals, the nickname “metacenter” never truly caught on. The site’s content and design was produced and maintained by the GNN staff. Not to be outdone by their print predecessors, GNN magazine contained interviews, features, biographies, and explainers. One hyperlink after another.

Over time, GNN would expand to affiliated publications. When the Mosaic team got too busy working on the web’s most popular browser, they handed off their browser homepage to the GNN team. The page was called What’s New, and it featured the most interesting links around the web for the day. The GNN team seized the opportunity to expand their platform even further.

Explaining what GNN was to someone who had never heard of the web, let alone a website, was an onerous task. Blaber explained GNN as giving “users a way to navigate through the information highway by providing insightful editorial content, easy point-and-click commands, and direct electronic links to information resources.” That’s a meaningful description of the site. It was a way into the web, one that wasn’t as fractured or unorganized as jumping in blind. It was also, however, the kind of thing you needed to see to understand.

And it was something to see. Years before stylesheets and armed with nothing but a handful of HTML tags, the GNN team set about creating the most ambitious project with the web medium yet. Browsers had only just begun allowing inline graphics, and GNN took full advantage. The homepage in particular featured big colorful graphics, including the hot air balloon that would endure for years as the GNN logo. They laid out their pages meticulously — most pages had a unique design. They used images as headers to break up the page. Most pages featured large graphics, and colored text and backgrounds. Wherever the envelope was, they’d push it a little further.

The result: a brand new kind of interactive experience. The web was a sea of plain websites with no design mostly coming from research institutions and colleges. Before Mosaic, bold graphics and colors weren’t even possible. And even after Mosaic’s release, the web was mostly filled with dense websites of scrolling text with nothing more than scientific diagrams to break it up, or sparse websites with a link, an email and a phone number. Most sites had nothing in the way of hierarchy or interactivity. Content was difficult to follow unless it was exactly what you were looking for. There was a ton of information on the web, but no one had thought to organize it to any meaningful degree. Imagine seeing all of that, day after day, and then one day you click a link and come to this:

It looks dated now, but a splash page with bold colors and big graphics, organized into sections and layered with interesting content… that was something to see.

The GNN team was creating the rules of web design, a field that had yet to be invented. In the first few years of the web, there were some experiments. The Vatican had scanned a number of materials from its archives and put them on a website. The Exploratorium took that one step further, creating the first online museum, with downloadable sounds and pictures. But they were still very much constrained by the simplicity of the web experience. Click this link, download this file, and that was it. GNN began to take things further. Dale Dougherty recalls that their goal was to “shift from the Internet as command line retrieval to the internet as this more digital interface… like a book.” A perfectly reasonable goal for a book publisher but a tall order for the web.

To accomplish their goal, GNN’s staff used the rules of graphic design as a roadmap (as philosopher Marshall McLuhan once said, “the content of any medium is always another medium”). But the team was also writing a brand new rulebook, on the fly, as they went. There were open questions about how to handle web graphics, new patterns for designing user interfaces, and best practices for writing HTML. Once the team closed one loop, they moved on to the next one. It was as if they were writing the manual for flying a rocketship — while strapped to the wings and hurtling towards space.

As browsers got better, GNN evolved to take advantage of the latest design possibilities. They began to use image maps to make more complex navigation. They added font tags and frames. GNN was also the first site on the web with a sponsored link, and even that was careful and considered. Before the popup would plague our browsing experience, GNN created simple, unobtrusive, informational adverts inserted in between their other listings.

GNN provided a template for the commercial web. As soon as they launched, dozens of copycats quickly followed. Many adopted a similar style and tone. Within a few years, web portals and online magazines would become so common they were considered trite and uninteresting. But very few sites that followed it had the lasting impact GNN did on a new generation of digital designers.

Ranjit Bhatnagar has an offbeat sort of humor. He’s a philosopher and a musician. He’s smart. He’s a fan of the weird and the banal. He’s anti-consumerist, or at the very least, opposed to consumerist culture. I won’t go as far as to say he’s pedantic, but he certainly revels in the most minute of details. He enjoys lively debates and engaged discourse. He’s fascinated by dreams, and once had a dream where he was flying through the air with his mother taking in the sights.

I’ve never met Bhatnagar. I know all of this because I read it on his website. Anyone can. And his website started with lunch.

Bhatnagar’s website was called Ranjit’s HTTP Playground. Playground describes it rather well; hyperlinks are scattered across the homepage like so many children’s toys. One link takes you to a half-finished web experiment. Another takes you to a list of his favorite bookmarks arranged by category. Yet another might contain a rant about the web, or a long-winded tribute to Kinder eggs. If you’re in the mood for a debate you can post your own thoughts to a page devoted to the single question: Are nuts wood? There’s still no consensus on that one.

Browsing Ranjit’s HTTP Playground is like peeling back the layers of Bhatnagar’s brain. He added new entries to his site pretty regularly, never more than a sentence or two, arranged in a series of dated bullet points. Pages were laid out on garish backgrounds, scalding bright green on jet black, or surrounded by a dizzying dance of animated GIFs. Each page was littered with links to more pages, seemingly at random. Every time you think you’ve reached the end of a thread, there’s another link to click. And every once in a while, you’ll find yourself back on the homepage wondering how you got there and how much time had passed in the meantime. This was the magic of the early web.

Bhatnagar first published his website in late 1993, just a few months after the GNN website went up. The very first thing Bhatnagar posted to his website was what he ordered for lunch every day. It was arranged in reverse chronological order, his most recent lunch order right at the top.

SLAC captured the utility of the web. GNN realized its popular appeal. Bhatnagar, and others like him, made the web personal.

Claudio Pinhanez began adding daily entries to the MIT Media Lab website in 1994. He posted movie and book reviews, personal musings, and shared his favorite links. He followed the same format as Bhatnagar’s Lunch Server. Entries were arranged on the page in reverse chronological order. Each entry was short and to the point — no longer than a sentence or two. This movie was good. This meal was bad. Isn’t it interesting that… and so on.

In early 1995, Carolyn Burke began posting daily entries to her website in one of the earliest examples of an online diary. Each one was a small slice from her life. The posts were longer than the short bursts of Pinhanez and Bhatnagar. Burke took her time with narrative anecdotes and meandering asides. She was loquacious and insightful. Her writing was conversational, and she promised readers that she would be honest. “I notice now that I have held back in being frank. My academic analysis skills come out, and I write with them things that I’ve known for a long time,” she wrote in an entry from the first few months, “But this is therapy for me… honesty and freedom therapy. Wow, that’s a loaded word. freedom.”

Perhaps no site was more honest, or more free as Burke puts it, than Links from the Underground. Its creator, Swarthmore undergraduate Justin Hall, had transformed inviting others into his life into an art form. What began as a simple link dump quickly transformed into a network of short stories and poems, diary entries, and personal details from his own life. The layout of the site matched that of Bhatnagar, scattered and unorganized. But his tone was closer to Burke’s, long and deeply, deeply personal. Just about every day, Hall would post to his website. It was his daily inner monologue made public.

Sometimes, he would cross a line. If you were a friend of Justin’s, he might share a secret that you told him in confidence, or disparage you on a fully public post. But he also shared the most intimate details from his own life, from dorm room drama to his greatest fears and inadequacies. He told stories from his troubled past, and publicly tried to come to terms with an alcoholic father. His good humor was often tinged with tragedy. He was clearly working through something emotional and personally profound, and he was using the web to do it out in the open.

But for Hall, this was all in the service of something far greater than himself. Describing the web to newcomers in a documentary about his experience on the web, Hall’s primary message was about its ability to create — not to tear down — connections.

What’s so great about the web is I was able to go out there and talk about what I care about, what I feel strongly about and people responded to it. Because every high school’s got a poet, whether it’s a rich high school or a poor high school, you know, they got somebody that’s in to writing, that’s in to getting people to tell their stories. You give them access to this technology and all of a sudden they’re telling stories to people in Israel, to people in Japan, to people in their own town that they never would have been able to talk to. And that’s, you know, that’s a revolution.

There’s that word again. Revolution. Though coming at the web from very different places, Addis and Hall agreed on at least one thing. I would venture to guess that they agreed on a whole lot more.

Justin Hall became a presence on the web not soon forgotten by those that came across him. He’s had two documentaries made about him (one of which he made himself). He’s appeared on talk shows. He’s toured the country. He’s had very public mental breakdowns. But he believed deeply that the web meant nothing at all unless it was a place for people to share their own stories.

When Tim Berners-Lee first imagined the web, he believed that everybody would have their own homepage. He designed his first browser with authoring capabilities for just that reason. That dream never came true. But Hall and Burke and Bhatnagar channeled a similar idea when they decided to make the web personal. They created their own homepages, even if it meant having to spend a few hours, or a few weeks, learning HTML.

Within a couple of years, the web filled up with these homepages. There were some notable breakthrough websites, like when David Farley began posting daily webcomics to Doctor Fun or VJ Adam Curry co-opted the MTV website to post his own personal brand of music entertainment. There were extreme examples. In 1996, Jennifer Ringley stuck a webcam in her room and beamed images every few seconds, so anyone could watch her entire life in real time. She called it Jennicam, a name that would ultimately lead to the moniker cam girl. Ringley appeared on talk shows and became an overnight sensation for her strange website that let others peer directly into her world.

But mostly, homepages acted as a creative outlet — short biographies, photo albums of families and pets, short stories, status updates. There were a lot of diaries. People posted their art, their “hot takes” and their deepest secrets and greatest passions. There were fan pages dedicated to discontinued television shows and boy bands. A dizzying array of style and personality with no purpose other than to simply exist.

Then came the links. At the bottom of a homepage: a list of links to other homepages. Scattered in diary posts, links to other websites. In one entry, Hall might post a link to Bhatnagar’s site, musing about the influence it had on his own website. Bhatnagar’s own site had his own chaotic list of his favorites. Eventually, so did Burke’s. Half the fun of a homepage was obsessing over which others to share.

As the web turned on a moment of connection, the process of discovery became its greatest asset. The fantastic intrigue of clicking on a link and being transported into the world and mind of another person was — in the end — the defining feature of the web. There would be plenty of opportunities to use the web to find something you want or need. The lesson of the homepage is that what people really wanted to find was each other. The web does that better than any technology that has come before it.

At the end of 1993, there were just over 600 websites. One year later, at the end of 1994, there were over 10,000. They no longer fit on a single page on the CERN website maintained by the web’s creator.

The personal website would become the cornerstone of the web. The web would be filled with more applications, like SLAC. And more businesses, like GNN. But it would mostly be filled with people. When the web’s next wave came crashing down, it would become truly social.


Can you get valid CSS property values from the browser?

Css Tricks - Tue, 08/18/2020 - 1:01pm

I had someone write in with this very legit question. Lea just blogged about how you can get valid CSS properties themselves from the browser. That’s like this.

CodePen Embed Fallback

That gives you, for example, the fact that cursor is a thing. But then how do you know what valid values are for cursor? We know from documentation that there are values like auto, none, help, context-menu, pointer, progress, wait, and many more.

But where does that list come from? Well, there is a list right in the spec so that’s helpful. But that doesn’t guarantee the complete list of values that any given browser actually supports. There could be cursor: skull-and-crossbones and we wouldn’t even know!

We can test by applying it to an element and looking in DevTools:

Damn.

But unless we launch a huge dictionary attack against that property, we don’t actually know what values it supports directly in-browser. Maybe Houdini will help somehow in browsers getting better at CSS introspection?

You can also use the CSS object to run tests like CSS.supports(property, value):

Damn.
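To be clear about what’s there: CSS.supports() answers yes or no for one specific property/value pair at a time, so you can check a guess but still can’t enumerate the valid values. A quick illustration:

// You can ask about one property/value pair at a time…
CSS.supports("cursor", "pointer"); // true
CSS.supports("cursor", "skull-and-crossbones"); // false
// …but there's no call that lists every value a property accepts.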

You’d think we could have like CSS.validValues("text-decoration-thickness") and get like ["<length>", "<percentage>", "auto", "from-font"] or the like, but alas, not a thing.


Timer Bars in CSS with Custom Properties

Css Tricks - Tue, 08/18/2020 - 9:04am

I was working on a thing the other day that needed a visible timer. There was UI precedent for this type of timer on the project. People didn’t want to see numbers ticking downward; it was more ideal to see a “bar” drain away from full to empty. I mention that because there are tons and tons of ways you could approach a “timer” UI. This isn’t an exploration of all of those (a search on CodePen would be more helpful there), but an exploration of the one way that was useful to me.

The kind of timer I needed was what the project called a “round time” bar. An action is performed. It may cause a round time, and most further actions are blocked until the round time is over. So, a very clear red bar that ticks away was the right UI. It gives a sense of rhythm and flow where you can kinda feel the end of the timer and time your next action.

a linear animation that shrinks the bar to zero.

Setting this up is fairly easy…

Let’s give ourselves a parent/child thing, just in case we want to style the empty part of the container at some point.

<div class="round-time-bar">
  <div></div>
</div>

For now, let’s just style the bar inside.

.round-time-bar div {
  height: 5px;
  background: linear-gradient(to bottom, red, #900);
}

That gives us a nice little red bar we can use for the time indicator.

Next we need to make it tick down, but here’s where we need to think about functionality. A timer like this needs to know how long it’s timing! We can give it that information right in the HTML. This doesn’t mean we’re avoiding JavaScript — we’re embracing it. We’re saying, “hey JavaScript, please give us the duration as a variable and we’ll take it from there.”

<div class="round-time-bar" style="--duration: 5;">
  <div></div>
</div>

In fact, this way is very friendly to modern DOM-handling JavaScript. As long as that --duration variable is correct, it is free to re-render that DOM element at any time and we can make sure the design handles that just fine. We’ll make a variation that does that.
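In other words, whatever renders the DOM only has to keep that one custom property current, with something like this hypothetical one-liner:

// Update the timer's length without touching the rest of the markup
document.querySelector(".round-time-bar").style.setProperty("--duration", "10");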

For now, let’s make the animation happen. Good news, it’s easy. Here’s a one-liner keyframe:

@keyframes roundtime {
  to {
    /* More performant than animating `width` */
    transform: scaleX(0);
  }
}

We can “squish” the bar because the design of the bar doesn’t have anything that will look squished when we scale it horizontally. If we did, we could animate the width. It’s not that big of a deal, especially since it doesn’t reflow anything else.

Now we apply it to the bar:

.round-time-bar div {
  /* ... */
  animation: roundtime calc(var(--duration) * 1s) steps(var(--duration)) forwards;
  transform-origin: left center;
}

See how we’re yanking that --duration variable to set the duration of the animation? That does the heavy lifting. I’m also using it to set the same number of steps() so it “ticks” down. That “ticking” might be a visual UI thing that you like (I do), but it also accommodates the idea that JavaScript might re-render this bar at any time, and the ticks make it so you are less likely to notice. I used an integer for the duration value so that it could do double-duty like this.

If you want a smooth animation though, we could do that as a variation, like:

<div class="round-time-bar" data-style="smooth" ... />

Then not do the steps:

.round-time-bar[data-style="smooth"] div {
  animation: roundtime calc(var(--duration) * 1s) linear forwards;
}

Note we’re also using a linear animation, which seems to make sense for a timer. Time, as it were, doesn’t ease. Or does it? Whatever, it’s your call. If you want a timer that appears to speed up or slow down at certain points, go for it.

We can use the same variation data-attribute-driven API for things like color variations:

.round-time-bar[data-color="blue"] div {
  background: linear-gradient(to bottom, #64b5f6, #1565c0);
}

And one final variation is making each “second” a fixed width. That way, a 10 second timer will literally look longer than a 5 second timer:

.round-time-bar[data-style="fixed"] div { width: calc(var(--duration) * 5%); }

Here’s the demo:

CodePen Embed Fallback

Notice the little trick in there for restarting CSS animation.
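
I can’t show the demo’s exact code here, but restarting a CSS animation usually comes down to a well-known move: remove the animation, force a reflow, then put the animation back. Here is a rough sketch of that pattern (the function name and argument are made up):

// One common way to restart a CSS animation — a sketch of the general pattern,
// not necessarily the exact trick used in the demo.
function restartTimer(bar) {
  bar.style.animation = "none";
  void bar.offsetWidth; // reading a layout property forces a reflow
  bar.style.animation = ""; // fall back to the animation declared in the stylesheet
}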

Oh, and hey, I know there is a <meter> element which is maybe a bit more semantic, but it brings its own UI which isn’t animatable like I wanted things to be here — at least not without fighting it. But I wonder if it’s more accessible? Does it announce its current value in a useful way? Would it be a more accessible timer if we were updating a <meter> in real-time with JavaScript? If anyone knows, I can link up a solution here.

The post Timer Bars in CSS with Custom Properties appeared first on CSS-Tricks.


Queue Jumping in Netlify

Css Tricks - Tue, 08/18/2020 - 6:07am

Cutting to the chase: if you’re on a Business or Enterprise team on Netlify, you can click a build to make it run next in a queue. For example, if you have a really time-sensitive thing (e.g. a bug fix going to production), it can jump ahead of some random development branch building. Now I’ll elaborate.

Part of the rocketjuice of Netlify is that it runs your builds for you. Say you have a Jekyll site. The build command is probably jekyll build. You tell Netlify that’s the command you want it to run and, if the build succeeds, Netlify deploys the result.

You can set the build command from a configuration file in the repo, or in the settings UI.

That build command is totally up to you. It could be npm run build and that calls the build command in your package.json which kicks off your custom scripts. Plus, with build plugins, you have a ton of control over the process (e.g. I got it to run Sass easily). That’s CI/CD!

Assuming you are linking up a Git repo, it’s not just pushing to your main branch where these builds run — it’s on any branch. That’s great for a bunch of reasons. For one, your build is probably running tests too, so it’s keeping you honest. For another, Netlify gives each push a permalink to a deployed version of that exact set of code. That’s tremendously useful. It’s like staging on steroids. Anybody who needs it can get a preview of the site.

On certain projects, you might have a whole team of developers working on a bunch of branches, committing code, and running builds. So Netlify might be awful busy doing all that work. Your build might get stuck behind other people’s stuff. Maybe it absolutely doesn’t matter. Or maybe you have an important meeting in 2 minutes and you really need this deploy preview for everyone to see.

Phil prioritizing some kind of musical coffee over the conference site build.

Now if you’re on a team (on a Business or Enterprise account), you can choose to hop the queue and have yours run next. People will be able to see it was you who did it so, ya know, ya gotta have a little courtesy.

The post Queue Jumping in Netlify appeared first on CSS-Tricks.


radEventListener: a Tale of Client-side Framework Performance

Css Tricks - Tue, 08/18/2020 - 4:59am

React is popular, popular enough that it receives its fair share of criticism. Yet, this criticism of React isn’t completely unwarranted: React and ReactDOM total about 120 KiB of minified JavaScript, which definitely contributes to slow startup time. When client-side rendering in React is relied upon entirely, it churns. Even if you render components on the server and hydrate them on the client, it still churns because component hydration is computationally expensive.

React certainly has its place when it comes to applications requiring complex state management, but in my professional experience, it doesn’t belong in most scenarios I see it used. When even a bit of React can be a problem on devices slow and fast alike, using it is an intentional choice that effectively excludes people with low-end hardware.

If it sounds like I have a grudge against React, then I must confess that I really like its componentization model. It makes organizing code easier. I think JSX is great. Server rendering is also cool—even if that’s just how we say “send HTML over the network” these days.

Still, even though I happily use React components on the server (or Preact, as is my preference), figuring out when it’s appropriate to use on the client is a bit challenging. What follows are my findings on React performance as I’ve tried to meet this challenge in a way that’s best for users.

Setting the scene

Lately, I’ve been chipping away at an RSS feed app side project called bylines.fyi. This app uses JavaScript on both the back and front end. I don’t think client-side frameworks are horrid things, but I’ve frequently observed two things about the client-side framework implementations I tend to run into in my day-to-day work and research:

  1. Frameworks have the potential to inhibit a deeper understanding of the things they abstract, which is the web platform. Without knowing at least some of the lower level APIs that frameworks rely on, we can’t know what projects benefit from a framework, and which projects are better off without one.
  2. Frameworks don’t always provide a clear path toward good user experiences.

You may be able to argue the validity of my first point, but the second point is becoming more difficult to refute. You might remember a little while ago when Tim Kadlec did some research on HTTPArchive about web framework performance, and came to the conclusion that React wasn’t exactly a stellar performer.

Still, I wanted to see if it was possible to use what I thought was best about React on the server while mitigating its ill effects on the client. To me, it makes sense to simultaneously want to use a framework to help to organize my code, but also restrict that framework’s negative impact on the user experience. That required a little experimentation to see what approach would be best for my app.

The experiment

I make sure to render every component I use on the server because I believe that the burden of providing markup should be assumed by the web app’s server, not the user’s device. However, I needed some JavaScript in my RSS feed app in order to get a toggleable mobile nav to work.

This scenario aptly describes what I refer to as simple state. In my experience, a prime example of simple state are linear A to B interactions. We toggle a thing on, and then we toggle it off. Stateful, but simple.

Unfortunately, I often see stateful React components used to manage simple state, which is a trade-off that’s problematic for performance. Though that may be a vague utterance for the moment, you’ll come to find out as you read on. That said, it’s important to emphasize that this is a trivial example, but it’s also a canary. Most developers—I hope—aren’t going to rely solely on React to drive such simple behavior for just one thing on their website. So it’s vital to understand that the results you’re going to see are intended to inform you on how you architect your applications, and how the effects of your framework choices could scale when it comes to runtime performance.

The conditions

My RSS feed app is still in development. It contains no third party code, which makes for easy testing in a quiet environment. The experiment I conducted compared the mobile nav toggle behavior across three implementations:

  1. A stateful React component (React.Component) rendered on the server and hydrated on the client.
  2. A stateful Preact component, also server-rendered and hydrated on the client.
  3. A server-rendered stateless Preact component which was not hydrated. Instead, regular ol’ event listeners provide the mobile nav functionality on the client.

Each of these scenarios were measured across four distinct environments:

  1. A Nokia 2 Android phone on Chrome 83.
  2. An ASUS X550CC laptop from 2013 running Windows 10 on Chrome 83.
  3. An old first generation iPhone SE on Safari 13.
  4. A new second generation iPhone SE, also on Safari 13.

I believe this range of mobile hardware will be illustrative of performance across a broad spectrum of device capabilities, even if it’s slightly heavy on the Apple side.

What was measured

I wanted to measure four things for each implementation in each environment:

  1. Startup time. For React and Preact, this included the time it took to load the framework code as well as hydrating the component on the client. For the event listener scenario, this included only the event listener code itself.
  2. Hydration time. For the React and Preact scenarios, this is a subset of the startup time. Because of issues with remote debugging crashing in Safari on macOS, I couldn’t measure hydration time alone on iOS devices. Event listener implementations incurred zero hydration cost.
  3. Mobile nav open time. This gives us insight into how much overhead frameworks introduce in their abstraction of event handlers, and how that compares to the frameworkless approach.
  4. Mobile nav close time. As it turned out, this was quite a bit less than the cost of opening the menu. I ultimately decided not to include those numbers in this article.

It should be noted that measurements of these behaviors include scripting time only. Any layout, paint, and compositing costs would be in addition to and outside of these measurements. One should take care to remember that those activities compete for main thread time in tandem with scripts that trigger them.

The procedure

To test each of the three mobile nav implementations on each device, I followed this procedure:

  1. I used remote debugging in Chrome on macOS for the Nokia 2. For iPhones, I used Safari’s equivalent of remote debugging.
  2. I accessed the RSS feed app running on my local network on each device to the same page where the mobile nav toggling code could be run. Because of this, network performance was not a factor in my measurements.
  3. Without CPU or network throttling applied, I began recording in the profiler, and reloaded the page.
  4. After page load, I opened the mobile nav and then closed it.
  5. I stopped the profiler, and recorded how much CPU time was involved in each of the four behaviors listed earlier.
  6. I cleared the performance timeline. In Chrome, I also clicked the garbage collection button to free up any memory that may have been tied up by my app’s code from a previous session recording.

I repeated this procedure ten times for each scenario for each device. Ten iterations seemed to get just enough data to see a few outliers while getting a reasonably accurate picture, but I’ll let you decide as we go over the results. If you don’t want a play-by-play of my findings, you can view the results at this spreadsheet and draw your own conclusions, as well as the mobile nav code for each implementation.

The results

I initially wanted to present this information in a graph, but because of the complexity of what I was measuring, I wasn’t certain how to present the results without cluttering the visualization. Therefore, I’ll present the minimum, maximum, median, and average CPU times in a series of tables, all of which effectively illustrate the range of outcomes I encountered in each test.

Google Chrome on Nokia 2

The Nokia 2 is a low-cost Android device with an ARM Cortex-A7 processor. It is not a powerhouse, but rather a cheap and easily obtainable device. Android usage worldwide is currently around 40%, and though Android device specs vary greatly from one device to the next, low-end Android devices are not rare. This is a problem we must recognize as being one of both wealth and proximity to fast network infrastructure.

Let’s see what the numbers look like for startup cost.

Startup time (all figures in milliseconds)

        React Component   Preact Component   addEventListener Code
Min     137.21            31.23              4.69
Median  147.76            42.06              5.99
Avg     162.73            43.16              6.81
Max     280.81            62.03              12.06

I believe it says something that it takes, on average, over 160 ms to parse and compile React, and hydrate one component. To remind you, startup cost in this case includes the time it takes for the browser to evaluate the scripts needed for the mobile nav to work. For React and Preact, it also includes hydration time, which in both cases can contribute to the uncanny valley effect we sometimes experience during startup.

Preact fares much better, taking around 73% less time than React, which makes sense considering how tiny Preact is at 10 KiB sans compression. Still, it’s important to note that the frame budget in Chrome is about 10 ms to avoid jank at 60 fps. Janky startup is as bad as janky anything else, and is a factor when calculating First Input Delay. All things considered, though, Preact performs relatively well.

As for the addEventListener implementation, it turns out that parse and compile time for a tiny script with no overhead is unsurprisingly very low. Even at the sampled maximum time of 12ms, you’re barely in the outer ring of the Janksburg Metropolitan Area. Now let’s have a look at hydration cost alone.

Hydration time (all figures in milliseconds)

        React Component   Preact Component
Min     67.04             19.17
Median  70.33             26.91
Avg     74.87             26.77
Max     117.86            44.62

For React, this is still in the vicinity of Yikes Peak. Sure, a median hydration time of 70 ms for one component isn’t a big deal, but think about how hydration cost scales when you have a bunch of components on the same page. It’s no surprise that the React websites I test on this device feel more like endurance trials than user experiences.

Preact’s hydration times are quite a bit less, which makes sense because Preact’s documentation for its hydrate method states that it “skips most diffing while still attaching event listeners and setting up your component tree.” Hydration time for the addEventListener scenario isn’t reported, because hydration isn’t a thing outside of VDOM frameworks. Next, let’s take a peek at the time it takes to open the mobile nav.

Mobile nav open time (all figures in milliseconds)

        React Component   Preact Component   addEventListener Code
Min     30.89             11.94              3.94
Median  43.62             14.29              6.14
Avg     43.16             14.66              6.12
Max     53.19             20.46              8.60

I find these figures a bit surprising, because React commands almost seven times as much CPU time to execute an event listener callback than an event listener you could register yourself. This makes sense, as React’s state management logic is necessary overhead, but one has to wonder if it’s worth it for simplistic, linear interactions.

On the other hand, Preact manages to limit its overhead on event listeners to the point where it takes “only” twice as much CPU time to run an event listener callback.

CPU time involved in closing the mobile nav was quite a bit less at an average approximate time of 16.5 ms for React, with Preact and bare event listeners coming in at around 11 ms and 6 ms, respectively. I’d post the full table for the measurements on closing the mobile nav, but we have a lot left to sift through yet. Besides, you can check out those figures yourself in the spreadsheet I referred to earlier on.

A quick note on JavaScript samples

Before moving on to the iOS results, one potential sticking point I want to address is the impact of disabling JavaScript samples in Chrome DevTools when recording sessions on remote devices. After compiling my initial results, I wondered if the overhead of capturing entire call stacks was skewing my results, so I re-tested the React scenario with samples disabled. As it turned out, this setting had no significant impact on the results.

Additionally, because the call stacks were truncated, I was unable to measure component hydration time. Average startup cost with samples disabled vs. samples enabled was 160.74 ms and 162.73 ms, respectively. The respective median figures were 157.81 ms and 147.76 ms. I would consider this squarely “in the noise.”

Safari on 1st Generation iPhone SE

The original iPhone SE is a great phone. Despite its age, it still enjoys devoted ownership owing to its more comfortable physical size. It shipped with the Apple A9 processor which is still a solid contender. Let’s see how it did on startup time.

Startup time (all figures in milliseconds)

        React Component   Preact Component   addEventListener Code
Min     32.06             7.63               0.81
Median  35.60             9.42               1.02
Avg     35.76             10.15              1.07
Max     39.18             16.94              1.56

This is a big improvement from the Nokia 2, and it’s illustrative of the gulf between low-end Android devices and even older Apple devices with significant mileage.

React performance still isn’t great, but Preact gets us within a typical frame budget for Chrome. Event listeners alone, of course, are blazingly fast, leaving plenty of room in the frame budget for other activity.

Unfortunately, I couldn’t measure hydration times on the iPhone, as the remote debugging session would crash every time I would traverse the call stack in Safari’s DevTools. Considering that hydration time was a subset of the overall startup cost, you can expect that it probably accounts for at least half of the startup time if results from the Nokia 2 trials are any indicator.

Mobile nav open time (all figures in milliseconds)

        React Component   Preact Component   addEventListener Code
Min     16.91             5.45               0.48
Median  21.11             8.62               0.50
Avg     21.09             11.07              0.56
Max     24.20             19.79              1.00

React does alright here, but Preact seems to handle event listeners a bit more efficiently. Bare event listeners are lightning fast, even on this old iPhone.

Safari on 2nd Generation iPhone SE

In mid-2020, I picked up the new iPhone SE. It has the same physical size as an iPhone 8 and similar phones, but the processor is the same Apple A13 used in the iPhone 11. It is very fast for its relatively low $400 USD retail price. Given such a beefy processor, how does it deal?

Startup time (all figures in milliseconds)

        React Component   Preact Component   addEventListener Code
Min     20.26             5.19               0.53
Median  22.20             6.48               0.69
Avg     22.02             6.36               0.68
Max     23.67             7.18               0.88

I guess at some point there are diminishing returns when it comes to the relatively small workload of loading a single framework and hydrating one component. Things are a little faster on a 2nd generation iPhone SE than its first generation variant in some cases, but not terribly so. I’d imagine that this phone would tackle larger and sustained workloads better than its predecessor.

Mobile nav open time (all figures in milliseconds)

        React Component   Preact Component   addEventListener Code
Min     13.15             12.06              0.49
Median  16.41             12.57              0.53
Avg     16.11             12.63              0.56
Max     17.51             13.26              0.78

Slightly better React performance here, but not much else. Strangely, Preact seems to take longer on average to open the mobile nav on this device than its first generation counterpart, but I’ll chalk that up to outliers skewing a relatively small dataset. I certainly would not assume the first generation iPhone SE is a faster device based on this.

Chrome on a dated Windows 10 Laptop

Admittedly, these were the results I was most excited to see: how does an ASUS laptop from 2013 with Windows 10 and an Ivy Bridge i5 of the day handle this stuff?

Startup time (all figures in milliseconds)

        React Component   Preact Component   addEventListener Code
Min     43.15             13.11              1.81
Median  45.95             14.54              2.03
Avg     45.92             14.47              2.39
Max     48.98             16.49              3.61

The numbers aren’t bad when you consider that the device is seven years old. The Ivy Bridge i5 was a good processor in its day, and when you couple that with the fact that it’s actively cooled (rather than passively cooled as mobile device processors are), it probably doesn’t run into thermal throttling scenarios as often as mobile devices.

Hydration time (all figures in milliseconds)

        React Component   Preact Component
Min     17.75             7.64
Median  23.55             8.73
Avg     23.12             8.72
Max     26.25             9.55

Preact does well here, and manages to stay within Chrome’s frame budget, and is almost three times faster than React. Things could look quite a bit different if you’re hydrating ten components on the page at startup time, possibly even in Preact.

Mobile nav open time (all figures in milliseconds)

        React Component   Preact Component   addEventListener Code
Min     6.06              2.50               0.88
Median  10.43             3.09               0.97
Avg     11.24             3.21               1.02
Max     14.44             4.34               1.49

When it comes to this isolated interaction, we see performance that’s similar to high-end mobile devices. It’s encouraging to see such an old laptop still keep up reasonably well. That said, this laptop’s fan spins up often when browsing the web, so active cooling is probably this device’s saving grace. If this device’s i5 was passively cooled, I suspect its performance might drop.

Shallow call stacks for the win

It’s not a mystery as to why it takes React and Preact longer to start up than it does for a solution that eschews frameworks altogether. Less work equals less processing time.

While I think startup time is crucial, it’s probably inevitable that you’ll trade some amount of speed for a better developer experience. Though I’d strenuously argue that we far too often trade away user experience in favor of developer experience.

The dragons also lie in what we do after the framework loads. Client-side hydration is something that I think is far too often abused, and can sometimes be completely unnecessary. Every time you hydrate a component in React, this is what you’re throwing at the main thread:

Recall that on the Nokia 2, the minimum time I measured for hydrating the mobile nav component was about 67 ms. Preact—for which you’ll see the hydration call stack below—takes about 20 ms.

These two call stacks aren’t to the same scale, but Preact’s hydration logic is simplified, probably because “most diffing is skipped” as Preact’s documentation states. There’s quite a bit less going on here. When you get closer to the metal by using addEventListener instead of a framework, you can get even faster.

A call stack of event listeners attaching to DOM elements.

Not every situation calls for this approach, but you’d be surprised at what you can accomplish when your tools are addEventListener, querySelector, classList, setAttribute/getAttribute, and so on.
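
For the mobile nav case in this article, the frameworkless version stays small. This isn’t the author’s actual implementation (that’s linked from the spreadsheet mentioned earlier); it’s just a sketch of the shape that code tends to take, with made-up selectors and class names:

// Sketch of a bare-bones nav toggle using only DOM APIs.
// The selectors and the "is-open" class are hypothetical.
const toggle = document.querySelector(".nav-toggle");
const nav = document.querySelector(".site-nav");

toggle.addEventListener("click", () => {
  const isOpen = nav.classList.toggle("is-open");
  // Keep assistive technology informed as the state flips.
  toggle.setAttribute("aria-expanded", String(isOpen));
});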

These methods—and many more like them—are what frameworks themselves rely on. The trick is to evaluate what functionality you can safely deliver outside of what the framework provides, and rely on the framework when it makes sense.

A call stack of React firing a click event handler to open a mobile nav.

If this were a call stack for, say, making a request for API data on the client and managing the complex state of the UI in that situation, I’d find this cost more acceptable. Yet, it’s not. We’re just making a nav appear on the screen when the user taps a button. It’s like using a bulldozer when a shovel would be a better fit for the job.

Preact at least strikes the middle ground:

A call stack of Preact firing a click event handler to open a mobile nav.

Preact takes about a third of the time to do the same work React does, but on that budget device, it exceeds the frame budget often. This means opening that nav on some devices will animate sluggishly because the layout and paint work may not have enough time to finish without entering long task territory.

A call stack of a bare event listener opening the mobile nav.

In this case, an event listener is what I needed. It gets the job done seven times faster on that budget device than React.

Conclusion

This is not a React hit piece, but rather a plea for consideration of how we do our work. Some of these performance pitfalls can be avoided if we take care to evaluate what tools make sense for the job, even for apps with a great deal of complex interactivity. To be fair to React, these pitfalls likely exist in many VDOM frameworks, because the nature of them adds necessary overhead to manage all sorts of things for us.

Even if you’re working on something that doesn’t call for React or Preact, but you want to take advantage of componentization, consider keeping it all on the server to start with. This approach means you can decide if and when it’s appropriate to extend functionality to the client—and how you’ll do that.

In the case of my RSS feed app, I can manage this by putting lightweight event listener code in the entry point for that page of the app, and using an asset manifest to put the minimal amount of script necessary in order for each page to work.

Now let’s suppose that you have an app that truly needs what React provides. You have complex interactivity with lots of state. Here are some things you can do to try and get things going a bit faster.

  1. Check all of your stateful components—that is, any component which extends React.Component—and see if they can be refactored as stateless components. If a component doesn’t use lifecycle methods or state, you can refactor it to be stateless.
  2. Then, if possible, avoid sending JavaScript to the client for those stateless components, as well as hydrating them. If a component is stateless, only render it on the server. Prerender components when possible to minimize server response time, because server rendering has its own performance pitfalls.
  3. If you have a stateful component with simple interactivity, consider prerendering/server-rendering that component, and replace its interactivity with framework-independent event listeners. This avoids hydration entirely, and user interactions won’t have to filter through the framework’s state management logic.
  4. If you must hydrate stateful components on the client, consider lazily hydrating components that aren’t near the top of the page. An Intersection Observer that triggers a callback works very well for this (there’s a rough sketch after this list), and will give more main thread time to critical components on the page.
  5. For lazily-hydrated components, assess whether you can schedule their hydration during main thread idle time with requestIdleCallback.
  6. If possible, consider switching from React to Preact. Given how much faster it runs than React on the client, it’s worth having the discussion with your team to see if this is possible. The latest version of Preact is nearly 1:1 with React for most things, and preact/compat does a great job of easing this transition. I don’t think Preact is a panacea for performance, but it gets you closer to where you need to be.
  7. Consider adapting your experience to users with low device memory. navigator.deviceMemory (available in Chrome and derived browsers) enables you to change the user experience for users on devices with little memory. If someone has such a device, it’s probable that its processor isn’t so fast either.
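
To make items 4 and 5 a little more concrete, here’s a rough sketch of deferring hydration until a component scrolls into view and then scheduling the work during idle time. It’s shown with Preact’s h and hydrate; the Comments component, the #comments element, and the import path are all hypothetical, and this is the general pattern rather than code from the article:

// Rough sketch of lazy hydration (items 4 and 5 above).
// The component, element, and import path are made up for illustration.
import { h, hydrate } from "preact";
import { Comments } from "./Comments.js";

const target = document.querySelector("#comments");

const observer = new IntersectionObserver((entries, obs) => {
  if (entries.some((entry) => entry.isIntersecting)) {
    obs.disconnect();
    // Prefer idle time if the browser supports it; otherwise hydrate right away.
    const schedule = "requestIdleCallback" in window
      ? (cb) => window.requestIdleCallback(cb)
      : (cb) => cb();
    schedule(() => hydrate(h(Comments), target));
  }
});

observer.observe(target);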

Whatever you decide to do with this information, the thrust of my argument is this: if you use React or any VDOM library, you should spend some time investigating its impact on an array of devices. Get a cheap Android device and see how your app feels to use. Contrast that experience with your high-end devices.

Most of all, don’t follow “best practices” if the result is that your app effectively excludes a part of your audience that can’t afford high end devices. Keep pushing for everything to be faster. If our daily work is any indication, this is an endeavor that will keep you busy for some time to come, but that’s OK. Making the web faster makes the web more accessible in more places. Making the web more accessible makes the web more inclusive. That’s the really good work we should all be trying our best to do.

I’d like to express my gratitude to Eric Bailey for his editorial feedback on this piece, as well as the CSS-Tricks staff for their willingness to publish it.

The post radEventListener: a Tale of Client-side Framework Performance appeared first on CSS-Tricks.


The New CSS-Tricks Video Intro by dina Amin

Css Tricks - Mon, 08/17/2020 - 12:38pm

You know we do video screencasts, right? It’s not, like, super regular, but I have done them for a long time, still like doing them, and plan to keep doing them. I publish them here, but you can subscribe over on YouTube as well.

I’ve had a couple of different custom video intro animations over the years, always done by someone far more handy with that kind of thing than I am. When I asked around in May this year, I got some good leads, but none better than Matthias paging Marc, and then Marc helping me out with an introduction to dina Amin.

One look at dina’s work and it’s an obvious: yes! She does stop-motion and a lot of breakin’ stuff:

Just one small example, check out the show reel too!

We chatted back and forth, scoping out the project. She says:

I worked together with Assem Kamal on a new intro for CSS-Tricks YouTube channel. We wanted to make something very short yet interesting so that audiences on YouTube don’t skip the intro or get bored if they watch a couple of CSS-Tricks videos back to back.

She researched and asked a bunch of questions. I like how she was very aware of the role an intro like this plays in a video, especially tutorials. Can’t be too long. Can’t be annoying in any way. It has to have enough detail that it’s almost fun to see multiple times.

The old video started with a keyboard:

This is the old one. Love it the spirit but could use a freshening up. pic.twitter.com/ZfkDHaFZYI

— Chris Coyier (@chriscoyier) May 26, 2020

We started with an Apple keyboard, because we wanted to keep something from the original intro that Chris’ audience would relate to, and most importantly because I wanted to take the keyboard apart!

https://www.instagram.com/p/CDzHSkuHyYl/

“Did we cut up that keyboard?!” Yes, we did. It was too easy to find multiple broken Apple keyboards, it’s a very well-engineered product and it all comes together beautifully with minimum parts, but only Apple can fix this product. You can’t just get your screw kit out and open this up and fix one flawed button connection. So a lot of these keyboards are thrown away because it’s too expensive to fix in countries like Egypt. We got our hands on three keyboards and we cut up one as we animated and used different keyboard buttons from the other two to make the buttons stretch.

It was fun seeing some behind-the-scenes stuff in her Instagram Stories:

And another connection from the original is the idea of websites as components and building out layouts. That was just referenced in the original with some background sketches and now comes to life with paper prototypes in this version.

We thought of showing the ‘how to make a website’ process in very abstract steps where each step quickly transitions and transforms into the other. Starting with keyboard clicks that turn into code, then design blocks that make up a website, which scrolls to reveal the CSS-Tricks logo.

It’s all done quite literally with stop motion! Hand moving bits around, taking photos, and making those into video.

Once we got the concept approved and our props ready, we spent hours and hours moving little pieces to make all this magic.

Check out a time lapse of the creation!

Ultimately, I got a number of versions in different aspect ratios and sizes, which is wonderful as I can switch it up and use different ones that might be more appropriate in different scenarios. Here’s the main one!

I’ve already been putting these on the start and end of videos like this one.

Thanks, dina and Assem!

The post The New CSS-Tricks Video Intro by dina Amin appeared first on CSS-Tricks.


CSS-Tricks Chronicle XXXVIII

Css Tricks - Fri, 08/14/2020 - 1:24pm

Hey gang! I’ve been fortunate enough to be a guest in a variety of different places recently, so I thought it was time for another Chronicle post. You know, those special posts where I round up the random goings-on of things I do off of this site.

I joined Ed & Tom over on A Question of Code.

We cover a lot of ground in this show. Why does having a personal site give you a massive advantage? (Having your own website puts you ahead of a surprising number of people; it should be table-stakes, but it’s not!) And what does job hunting (and running a job board) look like in the time of COVID? What will working remotely mean for junior devs in the near future?

That reminds me: I gotta update my personal site with these interviews.

Drew and I chatted about Serverless over on the Smashing Podcast.

We’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently?

Ya know, we have that site all about serverless, and we’ve had a good stream of pull requests on it so it stays decently up to date. This stuff only gets more interesting over time. The technology gets better and cheaper and it really can’t be ignored anymore.

Bob and Mendel had me on Do the Woo.

WooCommerce also drifted in and out of Chris’s web life, and recently he took it a bit deeper on his site CSS-Tricks. Although he isn’t deep into the WooCommerce community, he is a huge fan and we can gain useful insights and perspectives from his web experience.

Indeed, we do use WooCommerce around here — especially lately, what with the memberships and posters.

Gerry and I talked about the not-so-great direction that the web and technology is headed in many respects over on the Human-Centered Design podcast.

Gerry is a fascinating guy who does a ton of interesting work that always centers around the biggest and more important ideas out there. One of his recent projects is World Wide Waste and we get into that with him over on ShopTalk as well.

Here’s me as a guest at the Front-end Development South Africa online meetup:

Chris had me on the Self-Made Web Designer podcast (hey, I guess that’s what I am).

The front-end development field is constantly changing, and advancing in your career is becoming more and more difficult.

For one, the sheer amount of people in the industry has gone up drastically. Not to mention, what it actually means to be a front-end developer worthy of a tech job is getting more and more complicated.

So the question becomes, "How do you establish yourself as a front-end web developer?" Then, "How do you grow in your career?"

The post CSS-Tricks Chronicle XXXVIII appeared first on CSS-Tricks.


That’s Just How I Scroll

Css Tricks - Fri, 08/14/2020 - 10:00am

How do you know a page (or any element on that page) scrolls? Well, if it has a scrollbar, that’s a pretty good indication. You might still have to scrapple with your client about “the fold” or whatever, but I don’t think anyone is confused at what a scrollbar is or what it indicates.

But let’s say there is no scrollbar. That’s super common. macOS hides scrollbars by default and only shows them during scroll. Most mobile browsers don’t have scrollbars, even if you attempt to force them with something like overflow: scroll;.

Why does this matter? If you don’t know an area is scrollable, you might miss out on important content or functionality.

I regularly think about the Perfectly Cropped story from Tyler Hall. There is a screen on iOS that has important functionality you need to scroll down to, but there is no indicator whatsoever that you can scroll there.

The result of that was Tyler’s mom literally not being able to find functionality she was used to. Not great.

There is an elaborate way to detect visible scrollbars and force them to be visible, but something about that rubs me the wrong way. It doesn’t honor a user’s preference (assuming it is the user’s preference), requires DOM manipulation tests, and uses vendor-prefixed CSS (which will probably live a long time, but has been standardized now, so maybe not forever).
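
For what it’s worth, the detection part on its own doesn’t have to be elaborate. This isn’t the linked approach, just a rough sketch of checking whether an element actually overflows so CSS could hang a scroll hint off a class (the .scroll-area selector and is-scrollable class are made up):

// Rough sketch: flag elements that actually overflow so CSS can style a hint.
// The selector and class name are hypothetical.
function flagScrollable(el) {
  const canScroll = el.scrollHeight > el.clientHeight;
  el.classList.toggle("is-scrollable", canScroll);
}

document.querySelectorAll(".scroll-area").forEach(flagScrollable);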

I enjoy these approaches by Chris Smith and his thinking:

CodePen Embed Fallback

My favorite are the shadow-based techniques. To me an inset shadow is a really clear indicator, as it makes it appear that content is flowing underneath and the shadow follows an edge that acts as a hint that you can scroll in that direction. Plus, you’ve got CSS control there so I’d think it could match whatever UI situation you’re in fairly easily.

It should be noted that it can be done entirely in CSS, no JavaScript, and it’s one of the great CSS tricks.

The post That’s Just How I Scroll appeared first on CSS-Tricks.


What I Learned by Fixing One Line of CSS in an Open Source Project

Css Tricks - Fri, 08/14/2020 - 4:16am

I was browsing the Svelte docs on my iPhone and came across a blaring UI bug. The notch in the REPL knob was totally out of whack. I’m always looking to contribute to open source, and I thought this would be a quick and easy fix. Turns out, there was a lot more to it than just changing one line of CSS. 

Replicating, debugging, setting up the local environment was interesting, difficult, and meaningful.

The issue

I opened my browser DevTools, thinking I’d see the same issue in the phone view. But, the bug wasn’t there. Now this is a seriously tricky CSS problem.

💡 What I learned

If you’re using Chrome on iOS as your browser, you’re still using Safari’s renderer. From Wikipedia:

Chrome uses the iOS WebKit – which is Apple’s own mobile rendering engine and components, developed for their Safari browser – therefore it is restricted from using Google’s own V8 JavaScript engine.

This is backed up by caniuse, which provides this note on iOS Safari:

Now it’s clear why the issue wasn’t showing up on my machine but it was showing up on my phone. Different rendering engines! 

Reproduce the issue locally

I pulled down the project and ran it locally. I confirmed it was still an issue by running the local code in a simulator as well as on my actual iPhone. Safari on macOS has an easy way to open up DevTools instances of each one.

This provides access to a console just like you would have in the browser, but for iOS Safari. I’m not going to lie, Apple’s developer experience is top notch (see what I did there? 😬).

I’m able to reproduce the issue locally now.

💡 What I learned

After pulling down the Svelte repo and looking around the code a bit, I noticed the UI and SVGs were being pulled in via a package called @sveltejs/site-kit. Great, now I need my local version of site kit to get pulled into svelte/site so I can see changes and debug the issue.

I needed to point the node_modules in Svelte’s package.json to my local copy of site-kit. This sounded like a Symlink. After looking through the docs without much luck I Googled around and stumbled upon npm-link. That let me see what I was doing!

I can now make local changes to site-kit and see them reflected in the Svelte project.

Solving the issue

Seriously, all this needed was a one-line change:

border: transparent;

But locating where that one line should go was not as straightforward as you’d think. Source maps on the project are still a little rough around the edges and are showing this CSS coming from the Nav.svelte component when it was really coming from another file. That would be another great way to contribute to the project!

Then you search around and learn that this is being handled and you learn a little more about how it’s done. Everything now looks great on mobile and desktop.

That’s all it needed! Let’s rewind

What started as a quick, one-line change became a bit of a journey. I had to:

  • Run the project and component repositories
  • Learn about system linking
  • Contribute documentation about linking to site-kit
  • Learn about different browser renderers
  • Learn how to emulate an iOS Safari browser
  • Learn how to get access to its debugger
  • Find the issue when source maps weren’t working correctly
  • Fix the issue (finally!)

Working on your own, you normally don’t get to deal with issues like this, or have a large complex system you need to build a mental model of and learn. You don’t get to learn from maintainers. Most importantly, you don’t see all of the hard work that goes into building a popular technical product.

When I submitted this idea to CSS-Tricks, Chris said he had recently dealt with a similar situation. Difficult learning is durable learning. Embrace the struggle.

Never stop learning

I grabbed another issue from the Svelte project and now I’m learning about CSSStyleSheet because there’s another issue (I think), with how Safari handles keyframe animations within stylemanager.ts. And so the learning continues down paths I never would have trod in my day-to-day work.

When something breaks, enjoy the journey of learning the system. You’ll gain valuable insights into why that thing broke and what can be done to fix it. That’s one of the awesome benefits of contributing to open source projects and why I’d encourage you to do the same.

The post What I Learned by Fixing One Line of CSS in an Open Source Project appeared first on CSS-Tricks.


Stacked Cards with Sticky Positioning and a Dash of Sass

Css Tricks - Thu, 08/13/2020 - 4:45am

The other day, I spotted this particularly lovely bit from Corey Ginnivan’s website where a collection of cards stack on top of one another as you scroll.

I started wondering how much JavaScript this would involve and how you’d go about making it when I realized — ah! — this must be the work of position: sticky and a tiny amount of Sass. So, without diving into how Corey did this, I decided to take a crack at it myself.

First up, some default styles for the cards:

body {
  background: linear-gradient(#e8e8e8, #e0e0e0);
}

.wrapper {
  margin: 0 auto;
  max-width: 700px;
}

.card {
  background-color: #fff;
  border: 1px solid #ccc;
  border-radius: 10px;
  box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.1);
  color: #333;
  padding: 40px;
}

Next, we need to make each card sticky to the top of the wrapper. We can do that like this:

.card { position: sticky; top: 10px; // other card styles }

And that leaves us with this:

But how do we get each of these elements to look like a stack on top of one another? Well, we can use some fancy Sass magic to fix the position of each card. First we’ll loop over every card element and then change the value with each iteration:

@for $i from 1 through 8 {
  .card:nth-child(#{$i}n) {
    top: $i * 20px;
  }
}

Which results in this demo, which is totally charming, if I do say so myself:

CodePen Embed Fallback

And there we have it! We could make a few visual changes here to improve things. For example, the box-shadow and color of each card, just like Corey’s example. But I wanted to keep experimenting here. What if we switch the order of the cards and made them horizontal instead?

We already do that on this very website:

After experimenting for a little bit I changed the order of the cards with flexbox and made each item slide in from right to left:

.wrapper {
  display: flex;
  overflow-x: scroll;
}

.card {
  height: 60vh;
  min-width: 50vw;
  position: sticky;
  top: 5vh;
  left: 10vw;
}

But I also wanted to make each of the cards come in at different angles so I updated the Sass loop with the random function:

@for $i from 1 through 8 {
  .card:nth-child(#{$i}n) {
    left: $i * 20px;
    left: random(200) + $i * 1px;
    top: random(130) + $i * 1px;
    transform: rotate(random(3) - 2 * 1deg);
  }
}

That’s the bulk of the changes and that results in the following:

CodePen Embed Fallback

Pretty neat, eh? I love position: sticky; so much.

The post Stacked Cards with Sticky Positioning and a Dash of Sass appeared first on CSS-Tricks.


Chapter 2: Browsers

Css Tricks - Wed, 08/12/2020 - 11:10am
Previously in web history…

Sir Tim Berners-Lee creates the technologies behind the web — HTML, HTTP, and the URL which blend hypertext with the Internet — with a small team at CERN. He convinces the higher-ups in the organization to put the web in the public domain so anyone can use it.

Dennis Ritchie had a problem.

He was working on a new, world class operating system. He and a few other colleagues were building it from the ground up to be simple and clean and versatile. It needed to run anywhere and it needed to be fast.

Ritchie worked at Bell Labs. A hotbed of innovation in the 60s and 70s, Bell employed some of the greatest minds in telecommunications. While there, Ritchie had worked on a time-sharing project known as Multics. He was fiercely passionate about what he saw as the future of computing. Still, after years of development and little to show for it, Bell eventually dropped the project. But Ritchie and a few of his colleagues refused to let the dream go. They transformed Multics into a new operating system adaptable and extendable enough to be used for networked time sharing. They called it Unix.

Ritchie’s problem was with Unix’s software. More precisely, his problem was with the language the software ran on. He had been writing most of Unix in assembly code, quite literally feeding paper tape into the computer, the way it was done in the earliest days of computing. Programming directly in assembly — being “close to the metal” as some programmers refer to it — made Unix blazing fast and memory efficient. The process, on the other hand, was laborious and prone to errors.

Ritchie’s other option was to use B, an interpreted programming language developed by his co-worker Ken Thompson. B was much simpler to code with, several steps abstracted from the bare metal. However, it lacked features Ritchie felt were crucial. B also suffered under the weight of its own design; it was slow to execute and lacked the resilience needed for time-sharing environments.

Ritchie’s solution was to choose neither. Instead, he created a compiled programming language with many of the same features as B, but with more access to the kinds of things you could expect from assembly code. That language is called C.

By the time Unix shipped, it had been fully rewritten in C, and the programming language came bundled in every operating system that ran on top of it, which, as it turned out, was a lot of them. As more programmers tried C, they adapted to it quickly. It blended, some might say perfectly, abstract functions and methods for creating predictable software patterns with the ability to get right down to the metal if needed. It isn’t prescriptive, but it doesn’t leave you completely lost. Saron Yitbarek, host of the Command Line Heroes podcast, describes C as “a nearly universal tool for programming; just as capable on a personal computer as it was on a supercomputer.”

C has been called a Swiss Army language. There is very little it can’t do, and very little that hasn’t been done with it. Computer scientist Bill Dally once said, “It set the tone for the way that programming was done for several decades.” And that’s true. Many of the programming paradigms developed in the latter half of the 20th century originated in C. Compilers were developed beyond Unix, available in every operating system. Rob Pike, a software engineer involved in the development of Unix, and later Go, has a much simpler way of putting it. “C is a desert island language.”

Ritchie has a saying of his own he was fond of repeating. “C has all the elegance and power of assembly language with all the readability and maintainability of… assembly language.” C is not necessarily everyone’s favorite programming language, and there are plenty of problems with it. (C#, created in the early 2000s, was one of many attempts to improve it.) However, as it proliferated out into the world, bundled in Unix-like operating systems like X-Windows, Linux, and Mac OSX, software developers turned to it as a way to speak to one another. It became a kind of common tongue. Even if you weren’t fluent, you could probably understand the language conversationally. If you needed to bundle up and share some code, C was a great way to do it.

In 1993, Jean-François Groff and Sir Tim Berners-Lee had to release a package with all of the technologies of the web. It could be used to build web servers or browsers. They called it libwww, and released it to the public domain. It was written in C.

Think about the first time you browsed the web. That first webpage. Maybe it was a rich experience, filled with images, careful design and content you couldn’t find anywhere else. Maybe it was unadorned, uninteresting, and brief. No matter what that page was, I’d be willing to bet that it had some links. And when you clicked that link, there was magic. Suddenly, a fresh page arrives on your screen. You are now surfing the web. And in that moment you understand what the web is.

Sir Tim Berners-Lee finished writing the first web browser, WorldWideWeb, in the final days of 1990. It ran on his NeXT machine, and had read and write capabilities (the latter of which could be used to manage a homepage on the web). The NeXTcube wasn’t the heaviest computer you’ve ever seen, but it was still a desktop. That didn’t stop Berners-Lee from lugging it from conference to conference so he could plug it in and show people the web.

Again and again, he ran into the same problem. It will seem obvious to us now, considering the difficulty of demonstrating a globally networked hypertext application running on a little-used operating system (NeXT) on a not-widely-owned computer (NeXT Computer System), alone at a conference without the Internet. The problem came after the demo with the inevitable question: how can I start using it? The web lacks its magic if you can’t connect to the network yourself. It’s entirely useless isolated on a single computer. To make the idea click, Berners-Lee needed to get everybody surfing the web. And he couldn’t very well lend his computer out to anybody that wanted to use it.

That’s where Nicola Pellow came in. An undergraduate at Leicester Polytechnic, Pellow was still an intern at CERN. She was assigned to Berners-Lee’s and Cailliau’s team, so they tasked her with building an interoperable browser that could be installed anywhere. The fact that she had no background in programming (she was studying mathematics) and that she was at CERN as part of an internship didn’t concern her much. Within a couple of months she picked up a bit of C programming and built the Line Mode Browser.

Using the Line Mode Browser today, you would probably feel like a hacker from the 1980s. It was a text-only browser designed to run from a command line terminal. In most cases, just plain white text on a black background, pixels bleeding from edge to edge. Typing out a web address into the browser would bring up that website’s text on the screen. The up and down arrows on a keyboard could be used for navigation. Links were visible as a numbered list, and one could jump from site to site by entering the right number.

It was designed that way for a reason. Its simplicity guaranteed interoperability. The Line Mode Browser holds the unique distinction of being the only browser for many years to be platform-agnostic. It could be installed anywhere, on just about any computer or operating system. It made getting online easy, provided you knew what to do once you installed it. Pellow left CERN a few months after she released the Line Mode Browser. She returned after graduation, and helped build the first Mac browser.

Almost soon as Pellow left, Berners-Lee and Cailliau wrangled another recruit. Jean-François Groff was working at CERN, one office over. A programmer for years, Groff had written the French translation of the official C Programming Guide by Brian Kernighan and the language’s creator, Dennis Ritchie. He was working on a bit of physics software for UNIX systems when he got a chance to see what Berners-Lee was working on.

Not everybody understood what the web was going for. It can be difficult to grasp without the worldwide picture we have today. Groff was not one of those people. He longed for something just like the web. He understood perfectly what the web could be. Almost as soon as he saw a demo, he requested a transfer to the team.

He noticed one problem right away. “So this line mode browser, it was a bit of a chicken and egg problem,” he once described in an interview, “because to use it, you had to download the software first and install it and possibly compile it.” You had to use the web to download a web browser, but you needed a web browser to use the web. Groff found a clever solution. He built a simple mechanism that allowed users to telnet in to the NeXT server and browse the web using its built-in Line Mode Browser. So anyone in the world could remotely access the web without even needing to install the browser. Once they were able to look around, Groff hoped, they’d be hooked.

But Groff wanted to take it one step further. He came from UNIX systems, and C programming. C is a desert island language. Its versatility makes it invaluable as a one-size-fits-all solution. Groff wanted the web to be a desert island platform. He wanted it to be used in ways he hadn’t even imagined yet, ways that scientists at research institutions couldn’t even fathom. The one medium you could do anything with. To do that, he would need to make the web far more portable.

Working alongside Berners-Lee, Groff began pulling out the essential elements of the NeXT browser and porting them to the C programming language. Groff chose C not only because he was familiar with it, but because he knew most other programmers would be as well. Within a few months, he had built the libwww package (its official title would come a couple of years later). The libwww package was a set of common components for making graphical browsers. Included was the necessary code for parsing HTML, processing HTTP requests and rendering pages. It also provided a starting point for creating browser UI, and tools for embedding browser history and managing graphical windows.

Berners-Lee announced the web to the public for the first time on August 7, 1991. He posted a brief description along with a simple note:

If you’re interested in using the code, mail me. It’s very prototype, but available by anonymous FTP from info.cern.ch. It’s copyright CERN but free distribution and use is not normally a problem.

If you were to email Sir Tim Berners-Lee, he’d send you back the libwww package.

By November of 1992, the library had fully matured into a set of reusable tools. When CERN put the web in the public domain the following year, its terms included the libwww package. By 1993, anyone with a bit of time on their hands and a C compiler could create their own browser.

Before he left CERN to become one of the first web consultants, Groff did one final thing. He created a new mailing list, called www-talk, for a new generation of browser developers to talk shop.

On December 13, 1991 — almost a year after Berners-Lee had put the finishing touches on the first ever browser — Pei-Yuan Wei posted to the www-talk mailing list. After a conversation with Berners-Lee, he had built a browser called ViolaWWW. In a few months, it would be the most popular of the early browsers. In the middle of his post, Wei offhandedly — in a tone that would come off as bragging if it weren’t so sincere — mentioned that the browser build was a one night hack.

A one night hack. Not even Berners-Lee or Pellow could pull that off. Wei continued the post with the reasons he was able to get it up and running so quickly. But that nuance would be lost to history. What programmers would remember is that it only took one day to build a browser. It was “hacked” together and shipped to the world, buggy, but usable. That phrase would set the tone and pace of browser development for at least the next decade. It is arguably the dominant ideology among browser makers today.

The irony is the opposite was true. ViolaWWW was the product of years of work that simply culminated in a single night. Wei is a great software programmer. But he also had all the pieces he needed before the night even started.

Pei-Yuan Wei has made a few appearances on the frontlines of web history. Apart from the ViolaWWW browser, he was hired by Dale Dougherty to work on an early version of GNN.com, the first commercial website. He was at a meeting of web pioneers the day the idea of the W3C was first discussed. In 2012, he was on the list of witnesses to speak in court to the many dangers of the Stop Online Piracy Act (SOPA). In the web’s early history Wei was a persistent presence.

Wei was a student at UC Berkeley in the early 90s. It was HyperCard that set off his fascination with hypertext software. HyperCard was an application built for the Mac operating system in the late 80s. It allowed its users to create stacks of virtual “cards,” each with a bit of info. Users could then connect these cards however they wanted, and quickly sort, search, and navigate through their stacks. People used it to organize their recipes, replace their Rolodexes, organize research notes, and a million other things. HyperCard is the kind of software that attracts a person who demands a certain level of digital meticulousness, the kind of user that organizes their desktop folders into neat sections and precisely tags their data. This core group of power users manipulated the software using its built-in scripting language, HyperTalk, to extend it to new heights.

Wei had barely glimpsed HyperCard before he knew he needed to use it. But he was on an X-Windows computer, and HyperCard could only run on a Mac. Wei was not to be deterred. Instead of buying a Mac computer (an expensive but reasonable solution to the problem), Wei began writing software of his own. In fact, he went one step further and created his very own programming language. He called it Viola, and the first thing he built with it was a HyperCard clone.

Wei felt that the biggest limitation of HyperCard — and by extension his own hypertext software — was that it lacked access to a network. What good was data if it was locked up inside of a single computer? By the time he had reached that conclusion, it was nearing the end of 1991, around the time he saw a mention of the World Wide Web. So one night, he took Viola, combined it with libwww, and built a web browser. ViolaWWW was officially released.

ViolaWWW was built so quickly because most of it was already done by the time Wei found out about the web project. The Viola programming language had been in the works for a couple of years at that point. It had already been built to accept hyperlinks and hypermedia for the HyperCard clone. It had been built to be extendable to other possible applications. Once Wei was able to pick apart libwww, he ported his software to read HTML, which itself was still a preposterously simple language. And that piece, the final tip of the iceberg, only took him a single night.

ViolaWWW would be the site of a lot of experimentation on the early web. Wei was the first to include an early version of stylesheets. He added a bookmarking function. The browser supported forms and embedded media. In a prescient move, Wei also included downloadable applets, allowing fairly advanced applications to run inside the browser. This set the template for what would eventually become Java applets.

For X-Windows users, ViolaWWW was the most popular browser on the market. Until the next thing came along.

Releasing a browser in the early 90s was almost a rite of passage. It was a useful exercise to download the libwww package and open it up in your text editor. The web wasn’t all that complicated: there was a bit of code for rendering HTML, and for processing HTTP requests from web servers (or other origins, like FTP or Gopher). Programmers of the web used a browser project as a way of getting familiar with its features. It was kind of like the “Hello World” of the early web.
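To see why a browser made such an approachable project, it helps to remember how little was on the wire. As a rough sketch (the filenames here are hypothetical), a request in the earliest version of HTTP was a single line with no headers at all:

GET /hypertext/events.html

And the server’s reply was nothing but the raw HTML of the page, after which the connection closed:

<TITLE>Upcoming Events</TITLE>
<H1>Upcoming Events</H1>
<P>A list of <A HREF="past.html">past events</A> is also available.

Parse a handful of tags like those, draw some text in a window, and you had something you could reasonably call a browser.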

In June of 1993, there were 130 websites in the entire world. There were easily a dozen browsers to choose from. That’s roughly one browser for every ten websites.

This rapid development of browsers was driven by the nature of innovation in the web community. When Berners-Lee put the web in the public domain, he did more than just give it to the world. He put openness at the center of its ideology. It would take five years — with the release of Netscape — for the web to get its first commercial browser. Until then, the “browser makers” were a small community of programmers talking things out on the www-talk mailing list, trying to make web browsing feel as revolutionary as they wanted it to be.

Some of the earliest projects ported one browser to another operating system. Occasionally, one of the browser makers would spontaneously release something that now feels essential. The first PDF rendering inside of a browser window was a part of the Midas browser. HTML tables were introduced and properly laid out in another called Arena. Tabbed browsing was a prominent feature in InternetWorks. All of these features were developed before 1995.

Most early browsers have faded into obscurity. But the people behind them didn’t. Counted among the earliest browser makers are future employees at Netscape, members of the W3C and the web standards movement, the inventor of cookies (and the blink tag), and the creators of some of the most important websites of the early web.

Of course, no one knew that at the time. To most of the creators, it was simply an exercise in making something cool they could pass along to their Internet friends.

The New York Times introduced its readers to the web on December 8, 1993. “Think of it as a map to the buried treasures of the Information Age,” read the first line. But the “map” the writer was referring to — one he would spend the first half of the article describing — wasn’t the World Wide Web; it was its most popular browser. A browser called Mosaic.

Mosaic was created, in part, by Marc Andreessen. Like many of the early web pioneers, Andreessen is a man of lofty ambition. He is drawn to big ideas and grand statements (he once said that software will “eat the world”). In college, he was known for being far more talkative than your average software engineer, chatting it up about the next big thing.

Andreessen has had a decades-long passion for technology. Years later, he would capture the imagination of the public with the world’s first commercial browser: Netscape Navigator. He would grace the cover of Time magazine. He would become a cornerstone of Silicon Valley, define its rapid “ship first, think later” ethos for years, and seek and capture his fortune in the world of venture capital.

But Mosaic’s story does not begin with a commanding legend of Silicon Valley overseeing, for better or worse, the future of technology. It begins with a restless college student.

When Sir Tim Berners-Lee posted the initial announcement about the web, about a year before the article in The New York Times, Andreessen was an undergraduate student at the University of Illinois. While he attended school he worked at the university-affiliated computing lab known as the National Center for Supercomputing Applications (NCSA). NCSA occupied a similar space to ARPA in that both were state-sponsored projects without an explicit goal other than to further the science of computing. If you worked at NCSA, it was possible to move from project to project without arousing too much suspicion from the higher-ups.

Andreessen was supposed to be working on visualization software, which he had found a way to run mostly on auto-pilot. In his spare time, Andreessen would ricochet around the office, listening to everyone talk about what they were interested in. It was during one of those sessions that a colleague introduced him to the World Wide Web. He was immediately hooked. He downloaded the ViolaWWW browser, and within a few days he had decided that the web would be his primary focus. He decided something else too. He needed to make a browser of his own.

In 1992, browsers could be cumbersome software. They lacked the polish and the conventions of modern browsers; there were no decades of accumulated lessons to build on. They were difficult to download and install, often requiring users to make modifications to system files. And early browser makers were so focused on developing the web itself that they didn’t think too much about the visual interface of their software.

Andreessen wanted to build a well-designed, performant, easy-to-install browser while simultaneously building on the features that Wei was adding to the ViolaWWW browser. He pitched his idea to a programmer at NCSA, Eric Bina. “Marc’s a very good salesman,” Bina would later recall, so he joined up.

Taking their cue from the pace of others, Andreessen and Bina finished the first version of the Mosaic browser in just a few weeks. It was available for X Windows computers. To announce the browser, Andreessen posted a download link to the www-talk mailing list, with the message “By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released.” The web got more than just a popular browser. It got its first pitchman.

That first version of the browser was impressive in a somewhat crowded field. To be sure, it had forms and some media support early on. But it wasn’t the best browser, nor was it the most advanced. Instead, Andreessen and Bina focused on something else entirely. Mosaic set itself apart because it was the easiest to use. The installation process was simple and the interface was, relatively speaking, intuitive.

The Mosaic browser’s secret weapon was its iteration. Before long, other programmers at NCSA wanted in on the project. They parceled off different operating systems to port the browser to. One team took the Mac, another Windows. By the fall of 1993, a few months after its initial release, Mosaic had versions at feature parity on Mac, Windows, and Unix systems, as well as compatible server software.

After that, the pace of development only accelerated. Beta versions were released often and were available to download via FTP. New features were added at a rapid pace and new versions seemed to ship every week. The NCSA Mosaic team was fully engaged with the web community, active on the www-talk mailing list, talking with users and gathering bug reports. It was not at all unusual to submit a bug report and hear back a few hours later from an NCSA programmer with a fix.

Andreessen was a particularly active presence, posting to threads almost daily. When the Mosaic team decided they might want to collect anonymous analytics about browser usage, Andreessen polled the www-talk list to see if it was a good idea. When he got a lot of questions about how to use HTML, he wrote a guide for beginners.

When one Mosaic user posted some issues he was having, it led to a tense back and forth between that user and Andreessen. The user claimed he wasn’t a customer, so Andreessen shouldn’t care too much about what he thought. Andreessen replied, “We do care what you think simply because having the wonderful distributed beta team that we essentially have due to this group gives us the opportunity to make our product much better than it could be otherwise.” What Andreessen understood better than any of the early browser makers was that Mosaic was a product, and feedback from his users could drive its development. If they kept the feedback loop tight, they could keep the interface clean and bug-free while staying on the cutting edge of new features. It was the programming parable “given enough eyeballs, all bugs are shallow” come to life in browser development.

There was an electricity to Mosaic development at NCSA. Internal competition fueled OS teams to get features out the door. Sometimes the Mac version would get to something first. Sometimes it was Bina and Andreessen continuing to work on X-Mosaic. “We would get together, middle of the night, and come up with some cool idea — images was an example of that — then we would go off and race and see who would do it first,” Jon Mittelhauser, creator of the Windows version of Mosaic, later recalled. Sometimes, the features were duds and would hardly go anywhere at all. Other times, as Mittelhauser points out, they were absolutely essential.

In the months after launch, they started to surpass the feature list of even their nearest competitor, ViolaWWW. They added forms support and rich media. They added bookmarks for users to keep track of their links. They even created their own “What’s New” page, updated every single day, which tracked the web’s most popular links. When you opened up Mosaic, the NCSA What’s New page was the first thing you saw. They weren’t just building a browser. They were building a window to the web.

As Mittelhauser points out, it was the <img> tag that became Mosaic’s defining feature. It did two things. The tag was added without input from Sir Tim Berners-Lee or the wider web community. (Andreessen posted a note to www-talk only after it had already been implemented.) So first, it put the Mosaic team in a conflict with other browser makers and some parts of the web community that would last for years.

Second, it made Mosaic infinitely more popular. The <img> tag allowed images to be embedded directly inline in the Mosaic browser. Up to that point, people had found the web boring to browse. It was sterile, rigid, and scientific. Inline images changed all that. Within a few months, a new class of web designer was beginning to experiment with what was possible with images on the web. In some ways, it was the tag that made the web famous.
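The markup involved was tiny. As a sketch of what that looked like (the heading and filename here are made up, and GIF was among the image formats early Mosaic could display):

<H1>Tonight at the Venue</H1>
<P>Doors open at 8.
<IMG SRC="banjo-night.gif">

One extra line, and a page went from a wall of gray text to something people actually wanted to look at.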

The image tag prompted the feature in The New York Times, and a subsequent write-up in Wired. By the time the press got around to talking about the web, Mosaic was the most popular browser and became a surrogate for the larger web world. “Mosaic” was to browsing the web as “Google” is to searching now.

Ultimately, the higher-ups got involved. NCSA was not a tech company. They were a supercomputing lab. They came in to help make the Mosaic browser more cohesive and, maybe, more profitable. Licenses were parceled out to a dozen or so companies. Mosaic was bundled into Spry’s Internet in a Box product. It was embedded in enterprise software by the Santa Cruz Operation.

In the end, Mosaic split off into two directions. Pressure from management pushed Andreessen to leave and start a new company. It would be called Netscape. Another of the licensees of the software was a company called Spyglass. They were beginning to have talks with Microsoft. Both would ultimately choose to rewrite the Mosaic browser from scratch, for different reasons. Yet that browser would be their starting point, and their products would have a lasting impact on the browser market for decades as the world began to see its first commercial browsers.

The post Chapter 2: Browsers appeared first on CSS-Tricks.

