Tech News

Chapter 2: Browsers

Css Tricks - Wed, 08/12/2020 - 11:10am
Previously in web history…

Sir Tim Berners-Lee creates the technologies behind the web — HTML, HTTP, and the URL, which blend hypertext with the Internet — with a small team at CERN. He convinces the higher-ups in the organization to put the web in the public domain so anyone can use it.

Dennis Ritchie had a problem.

He was working on a new, world-class operating system. He and a few other colleagues were building it from the ground up to be simple and clean and versatile. It needed to run anywhere and it needed to be fast.

Ritchie worked at Bell Labs. A hotbed of innovation in the ’60s and ’70s, Bell employed some of the greatest minds in telecommunications. While there, Ritchie had worked on a time-sharing project known as Multics. He was fiercely passionate about what he saw as the future of computing. Still, after years of development and little to show for it, Bell eventually dropped the project. But Ritchie and a few of his colleagues refused to let the dream go. They transformed Multics into a new operating system adaptable and extendable enough to be used for networked time-sharing. They called it Unix.

Ritchie’s problem was with Unix’s software. More precisely, his problem was with the language the software ran on. He had been writing most of Unix in assembly code, quite literally feeding paper tape into the computer, the way it was done in the earliest days of computing. Programming directly in assembly — being “close to the metal” as some programmers refer to it — made Unix blazing fast and memory efficient. The process, on the other hand, was laborious and prone to errors.

Ritchie’s other option was to use B, an interpreted programming language developed by his co-worker Ken Thompson. B was much simpler to code with, several steps abstracted from the bare metal. However, it lacked features Ritchie felt were crucial. B also suffered under the weight of its own design; it was slow to execute and lacked the resilience needed for time-sharing environments.

Ritchie’s solution was to choose neither. Instead, he created a compiled programming language with many of the same features as B, but with more access to the kinds of things you could expect from assembly code. That language is called C.

By the time Unix shipped, it had been fully rewritten in C, and the programming language came bundled with every system that ran it, which, as it turned out, was a lot of them. As more programmers tried C, they adapted to it quickly. It blended, as some might say perfectly, abstracted functions and methods for creating predictable software patterns with the ability to get right down to the metal if needed. It isn’t prescriptive, but it doesn’t leave you completely lost. Saron Yitbarek, host of the Command Line Heroes podcast, describes C as “a nearly universal tool for programming; just as capable on a personal computer as it was on a supercomputer.”

C has been called a Swiss Army language. There is very little it can’t do, and very little that hasn’t been done with it. Computer scientist Bill Dally once said, “It set the tone for the way that programming was done for several decades.” And that’s true. Many of the programming paradigms developed in the latter half of the 20th century originated in C. Compilers spread beyond Unix and became available for every operating system. Rob Pike, a software engineer involved in the development of Unix, and later Go, has a much simpler way of putting it: “C is a desert island language.”

Ritchie had a saying of his own he was fond of repeating: “C has all the elegance and power of assembly language with all the readability and maintainability of… assembly language.” C is not necessarily everyone’s favorite programming language, and there are plenty of problems with it. (C#, created in the early 2000s, was one of many attempts to improve on it.) However, as it proliferated out into the world, bundled into Unix-like operating systems and environments like X-Windows, Linux, and Mac OS X, software developers turned to it as a way to speak to one another. It became a kind of common tongue. Even if you weren’t fluent, you could probably understand the language conversationally. If you needed to bundle up and share some code, C was a great way to do it.

In 1993, Jean-François Groff and Sir Tim Berners-Lee released a package containing all of the technologies of the web. It could be used to build web servers or browsers. They called it libwww, and they placed it in the public domain. It was written in C.

Think about the first time you browsed the web. That first webpage. Maybe it was a rich experience, filled with images, careful design, and content you couldn’t find anywhere else. Maybe it was unadorned, uninteresting, and brief. No matter what that page was, I’d be willing to bet that it had some links. And when you clicked one of those links, there was magic. Suddenly, a fresh page arrived on your screen. You were surfing the web. And in that moment you understood what the web was.

Sir Tim Berners-Lee finished writing the first web browser, WorldWideWeb, in the final days of 1990. It ran on his NeXT machine, and had read and write capabilities (the latter of which could be used to manage a homepage on the web). The NeXTcube wasn’t the heaviest computer you’ve ever seen, but it was still a desktop. That didn’t stop Berners-Lee from lugging it from conference to conference so he could plug it in and show people the web.

Again and again, he ran into the same problem. It will seem obvious to us now: it is hard to demonstrate a globally networked hypertext application on a little-used operating system (NeXTSTEP) and a not-widely-owned computer (the NeXT Computer System), alone at a conference, without an Internet connection. The problem came after the demo, with the inevitable question: how can I start using it? The web lacks its magic if you can’t connect to the network yourself; it’s entirely useless isolated on a single computer. To make the idea click, Berners-Lee needed to get everybody surfing the web. And he couldn’t very well lend his computer out to anybody who wanted to use it.

That’s where Nicola Pellow came in. An undergraduate at Leicester Polytechnic, Pellow was an intern at CERN, assigned to Berners-Lee and Cailliau’s team, and they tasked her with building an interoperable browser that could be installed anywhere. The fact that she had no background in programming (she was studying mathematics) and was at CERN as part of an internship didn’t concern her much. Within a couple of months she had picked up a bit of C programming and built the Line Mode Browser.

Using the Line Mode Browser today, you would probably feel like a hacker from the 1980s. It was a text-only browser designed to run from a command line terminal. In most cases, just plain white text on a black background, pixels bleeding from edge to edge. Typing out a web address into the browser would bring up that website’s text on the screen. The up and down arrows on a keyboard could be used for navigation. Links were visible as a numbered list, and one could jump from site to site by entering the right number.

It was designed that way for a reason. Its simplicity guaranteed interoperability. The Line Mode Browser holds the unique distinction of being the only browser for many years to be platform-agnostic. It could be installed anywhere, on just about any computer or operating system. It made getting online easy, provided you knew what to do once you installed it. Pellow left CERN a few months after she released the Line Mode Browser. She returned after graduation, and helped build the first Mac browser.

Almost as soon as Pellow left, Berners-Lee and Cailliau wrangled another recruit. Jean-François Groff was working at CERN, one office over. A programmer for years, Groff had written the French translation of The C Programming Language, the official guide by Brian Kernighan and the language’s creator, Dennis Ritchie. He was working on a bit of physics software for UNIX systems when he got a chance to see what Berners-Lee was working on.

Not everybody understood what the web was going for. It can be difficult to grasp without the worldwide picture we have today. Groff was not one of those people. He longed for something just like the web. He understood perfectly what the web could be. Almost as soon as he saw a demo, he requested a transfer to the team.

He noticed one problem right away. “So this line mode browser, it was a bit of a chicken and egg problem,” he once described in an interview, “because to use it, you had to download the software first and install it and possibly compile it.” You had to use the web to download a web browser, but you needed a web browser to use the web. Groff found a clever solution. He built a simple mechanism that allowed users to telnet in to the NeXT server and browse the web using its built-in Line Mode Browser. So anyone in the world could remotely access the web without even needing to install the browser. Once they were able to look around, Groff hoped, they’d be hooked.

But Groff wanted to take it one step further. He came from UNIX systems, and C programming. C is a desert island language. Its versatility makes it invaluable as a one-size-fits-all solution. Groff wanted the web to be a desert island platform. He wanted it to be used in ways he hadn’t even imagined yet, ways that scientists at research institutions couldn’t even fathom. The one medium you could do anything with. To do that, he would need to make the web far more portable.

Working alongside Berners-Lee, Groff began pulling out the essential elements of the NeXT browser and porting them to the C programming language. Groff chose C not only because he was familiar with it, but because he knew most other programmers would be as well. Within a few months, he had built the libwww package (its official title would come a couple of years later). The libwww package was a set of common components for making graphical browsers. Included was the necessary code for parsing HTML, processing HTTP requests and rendering pages. It also provided a starting point for creating browser UI, and tools for embedding browser history and managing graphical windows.

Berners-Lee announced the web to the public for the first time on August 7, 1991. He posted a brief description along with a simple note:

If you’re interested in using the code, mail me. It’s very prototype, but available by anonymous FTP from info.cern.ch. It’s copyright CERN but free distribution and use is not normally a problem.

If you were to email Sir Tim Berners-Lee, he’d send you back the libwww package.

By November of 1992, the library had fully matured into a set of reusable tools. When CERN put the web in the public domain the following year, its terms included the libwww package. By 1993, anyone with a bit of time on their hands and a C compiler could create their own browser.

Before he left CERN to become one of the first web consultants, Groff did one final thing. He created a new mailing list, called www-talk, for a new generation of browser developers to talk shop.

On December 13, 1991 — almost a year after Berners-Lee had put the finishing touches on the first ever browser — Pei-Yuan Wei posted to the www-talk mailing list. After a conversation with Berners-Lee, he had built a browser called ViolaWWW. In a few months, it would be the most popular of the early browsers. In the middle of his post, Wei offhandedly — in a tone that would come off as bragging if it weren’t so sincere — mentioned that the browser build was a one night hack.

A one night hack. Not even Berners-Lee or Pellow could pull that off. Wei continued the post with the reasons he was able to get it up and running so quickly, but that nuance would be lost to history. What programmers would remember is that it only took one night to build a browser. It was “hacked” together and shipped to the world, buggy, but usable. That phrase would set the tone and pace of browser development for at least the next decade. It is arguably the dominant ideology among browser makers today.

The irony is that the opposite was true. ViolaWWW was the product of years of work that simply culminated in a single night. Wei was a great software programmer, but he also had all the pieces he needed before the night even started.

Pei-Yuan Wei has made a few appearances on the frontlines of web history. Apart from the ViolaWWW browser, he was hired by Dale Dougherty to work on an early version of GNN.com, the first commercial website. He was at a meeting of web pioneers the day the idea of the W3C was first discussed. In 2012, he was on the list of witnesses set to speak about the many dangers of the Stop Online Piracy Act (SOPA). In the web’s early history, Wei was a persistent presence.

Wei was a student at UC Berkeley in the early 90s. It was HyperCard that set off his fascination with hypertext software. HyperCard was an application built for the Mac operating system in the late 80s. It allowed its users to create stacks of virtual “cards,” each with a bit of info. Users could then connect these cards however they wanted, and quickly sort, search, and navigate through their stacks. People used it to organize their recipes, replace their Rolodexes, organize research notes, and a million other things. HyperCard is the kind of software that attracts a person who demands a certain level of digital meticulousness, the kind of user who organizes their desktop folders into neat sections and precisely tags their data. This core group of power users manipulated the software using its built-in scripting language, HyperTalk, to extend it to new heights.

Wei had barely glimpsed HyperCard before he knew he needed to use it. But he was on an X-Windows computer, and HyperCard could only run on a Mac. Wei was not to be deterred. Instead of buying a Mac computer (an expensive but reasonable solution to the problem), Wei began to write software of his own. He even went one step further and created his very own programming language. He called it Viola, and the first thing he built with it was a HyperCard clone.

Wei felt that the biggest limitation of HyperCard — and by extension his own hypertext software — was that it lacked access to a network. What good was data if it was locked up inside of a single computer? By the time he had reached that conclusion, it was nearing the end of 1991, around the time he saw a mention of the World Wide Web. So one night, he took Viola, combined it with libwww, and built a web browser. ViolaWWW was officially released.

ViolaWWW was built so quickly because most of it was already done by the time Wei found out about the web project. The Viola programming language was in the works for a couple of years at that point. It had already been built to accept hyperlinks and hypermedia for the HyperCard clone. It had been built to be extendable to other possible applications. Once Wei was able to pick apart libwww, he ported his software to read HTML, which itself was still a preposterously simple language. And that piece, the final tip of the iceberg, only took him a single night.

ViolaWWW would be the site of a lot of experimentation on the early web. Wei was the first to include an early version of stylesheets. He added a bookmarking function. The browser supported forms and embedded media. In a prescient move, Wei also included downloadable applets, allowing fairly advanced applications to run inside of the browser. This became the template for what would eventually become Java applets.

For X-Windows users, ViolaWWW was the most popular browser on the market. Until the next thing came along.

Releasing a browser in the early 90s was almost a rite of passage. Downloading the libwww package and opening it up in your text editor was a useful exercise. The web wasn’t all that complicated: there was a bit of code for rendering HTML, and for processing HTTP requests from web servers (or other origins, like FTP or Gopher). Programmers of the web used a browser project as a way of getting familiar with its features. It was kind of like the “Hello World” of the early web.

In June of 1993, there were 130 websites in the entire world, and easily a dozen browsers to choose from. That’s roughly one browser for every ten websites.

This rapid development of browsers was driven by the nature of innovation in the web community. When Berners-Lee put the web in the public domain, he did more than just give it to the world. He put openness at the center of its ideology. It would take five years — with the release of Netscape — for the web to get its first commercial browser. Until then, the “browser makers” were a small community of programmers talking things out on the www-talk mailing list, trying to make web browsing feel as revolutionary as they wanted it to be.

Some of the earliest projects ported one browser to another operating system. Occasionally, one of the browser makers would spontaneously release something that now feels essential. The first PDF rendering inside of a browser window was a part of the Midas browser. HTML tables were introduced and properly laid out in another called Arena. Tabbed browsing was a prominent feature in InternetWorks. All of these features were developed before 1995.

Most early browsers have faded into obscurity. But the people behind them didn’t. Counted among the earliest browser makers are future employees at Netscape, members of the W3C and the web standards movement, the inventor of cookies (and the blink tag), and the creators of some of the most important websites of the early web.

Of course, no one knew that at the time. To most of the creators, it was simply an exercise in making something cool they could pass along to their Internet friends.

The New York Times introduced its readers to the web on December 8, 1993. “Think of it as a map to the buried treasures of the Information Age,” read the first line. But the “map” the writer was referring to — one he would spend the first half of the article describing — wasn’t the World Wide Web; it was its most popular browser. A browser called Mosaic.

Mosaic was created, in part, by Marc Andreessen. Like many of the early web pioneers, Andreessen is a man of lofty ambition. He is drawn to big ideas and grand statements (he once said that software will “eat the world”). In college, he was known for being far more talkative than your average software engineer, chatting it up about the next big thing.

Andreessen has had a decades-long passion for technology. Years later, he would capture the imagination of the public with the world’s first commercial browser: Netscape Navigator. He would grace the cover of Time magazine. He would become a cornerstone of Silicon Valley, define its rapid “ship first, think later” ethos for years, and seek and capture his fortune in the world of venture capital.

But Mosaic’s story does not begin with a commanding legend of Silicon Valley overseeing, for better or worse, the future of technology. It begins with a restless college student.

When Sir Tim Berners-Lee posted the initial announcement about the web, a couple of years before the article in The New York Times, Andreessen was an undergraduate student at the University of Illinois. While he attended school, he worked at the university-affiliated computing lab known as the National Center for Supercomputing Applications (NCSA). NCSA occupied a similar space as ARPA in that both were state-sponsored projects without an explicit goal other than to further the science of computing. If you worked at NCSA, it was possible to move from project to project without arousing too much suspicion from the higher-ups.

Andreessen was supposed to be working on visualization software, which he had found a way to run mostly on auto-pilot. In his spare time, Andreessen would ricochet around the office, listening to everyone about what it was they were interested in. It was during one of those sessions that a colleague introduced him to the World Wide Web. He was immediately taken with it. He downloaded the ViolaWWW browser, and within a few days he had decided that the web would be his primary focus. He decided something else, too. He needed to make a browser of his own.

In 1992, browsers could be cumbersome software. They lacked the polish and the conventions of modern browsers; there were no decades of accumulated lessons to build off of. They were difficult to download and install, often requiring users to make modifications to system files. And early browser makers were so focused on developing the web that they didn’t think too much about the visual interface of their software.

Andreessen wanted to build a well-designed, performant, easy-to-install browser while simultaneously building on the features that Wei was adding to the ViolaWWW browser. He pitched his idea to a programmer at NCSA, Eric Bina. “Marc’s a very good salesman,” Bina would later recall, so he joined up.

Taking their cue from the pace of others, Andreessen and Bina finished the first version of the Mosaic browser in just a few weeks. It was available for X Windows computers. To announce the browser, Andreessen posted a download link to the www-talk mailing list, with the message “By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released.” The web got more than just a popular browser. It got its first pitchman.

That first version of the browser was impressive in a somewhat crowded field. To be sure, it had forms and some media support early on. But it wasn’t the best browser, nor was it the most advanced browser. Instead, Andreessen and Bina focused on something else entirely. Mosaic set itself apart because it was the easiest to use. The installation process was simple and the interface was, relatively speaking, intuitive.

The Mosaic browser’s secret weapon was its iteration. Before long, other programmers at NCSA wanted in on the project. They parceled off different operating systems to port the browser to. One team took the Mac, another Windows. By the fall of 1993, a few months after its initial release, Mosaic had feature parity across its Mac, Windows, and Unix versions, as well as compatible server software.

After that, the pace of development only accelerated. Beta versions were released often and were available to download via FTP. New features were added at a rapid pace and new versions seemed to ship every week. The NCSA Mosaic team was fully engaged with the web community, active on the www-talk mailing list, talking with users and gathering bug reports. It was not at all unusual to submit a bug report and hear back a few hours later from an NCSA programmer with a fix.

Andreessen was a particularly active presence, posting to threads almost daily. When the Mosaic team decided they might want to collect anonymous analytics about browser usage, Andreessen polled the www-talk list to see if it was a good idea. When he got a lot of questions about how to use HTML, he wrote a guide for beginners.

When one Mosaic user posted some issues he was having, it led to a tense back and forth between that user and Andreessen. The user claimed that since he wasn’t a customer, Andreessen shouldn’t care too much about what he thought. Andreessen replied, “We do care what you think simply because having the wonderful distributed beta team that we essentially have due to this group gives us the opportunity to make our product much better than it could be otherwise.” What Andreessen understood better than any of the early browser makers was that Mosaic was a product, and feedback from his users could drive its development. If they kept the feedback loop tight, they could keep the interface clean and bug-free while staying on the cutting edge of new features. It was the programming parable “given enough eyeballs, all bugs are shallow” come to life in browser development.

There was an electricity to Mosaic development at NCSA. Internal competition fueled OS teams to get features out the door. Sometimes the Mac version would get to something first. Sometimes it was Bina and Andreessen continuing to work on X-Mosaic. “We would get together, middle of the night, and come up with some cool idea — images was an example of that — then we would go off and race and see who would do it first,” creator of the Windows version of Mosaic Jon Mittelhauser later recalled. Sometimes, the features were duds and would hardly go anywhere at all. Other times, as Mittelhauser points out, they were absolutely essential.

In the months after launch, they started to surpass the feature list of even their nearest competitor, ViolaWWW. They added forms support and rich media. They added bookmarks for users to keep track of their links. They even created their own “What’s New” page, updated every single day, which tracked the web’s most popular links. When you opened up Mosaic, the NCSA What’s New page was the first thing you saw. They weren’t just building a browser. They were building a window to the web.

As Mittelhauser points out, it was the <img> tag that became Mosaic’s defining feature, and it did two things. The tag was added without input from Sir Tim Berners-Lee or the wider web community. (Andreessen posted a note to www-talk only after it had already been implemented.) So firstly, it set the Mosaic team in conflict with other browser makers and some parts of the web community, a conflict that would last for years.

Secondly, it made Mosaic infinitely more popular. The <img> tag allowed images to be embedded directly inline in the Mosaic browser. Until then, many people had found the web boring to browse: it was sterile, rigid, and scientific. Inline images changed all that. Within a few months, a new class of web designer was beginning to experiment with what was possible with images on the web. In some ways, it was the tag that made the web famous.

The image tag prompted the feature in The New York Times, and a subsequent write-up in Wired. By the time the press got around to talking about the web, Mosaic was the most popular browser and became a surrogate for the larger web world. “Mosaic” was to browsing the web as “Google” is to searching now.

Ultimately, the higher-ups got involved. NCSA was not a tech company; it was a supercomputing lab. It stepped in to help make the Mosaic browser more cohesive and, maybe, more profitable. Licenses were parceled out to a dozen or so companies. Mosaic was bundled into Spry’s Internet in a Box product. It was embedded in enterprise software by the Santa Cruz Operation.

In the end, Mosaic split off in two directions. Pressure from management pushed Andreessen to leave and start a new company. It would be called Netscape. Another licensee of the software was a company called Spyglass, which was beginning to have talks with Microsoft. Both would ultimately choose to rewrite the Mosaic browser from scratch, for different reasons. Yet that browser would be their starting point, and their products would have lasting implications for the browser market for decades as the world began to see its first commercial browsers.


The cult of the free must die

QuirksBlog - Wed, 08/12/2020 - 4:13am

For a terrible half an hour last night I thought Firefox was on the brink of disappearing as an independent browser and rendering engine. I was frightened silly by the rumour that Mozilla laid off significant portions of its browser team. The official press release being full of useless corporate open web blah didn’t help — the vaguer the press release, the worse the situation, in my experience.

Fortunately, the news is not quite that bad. Still, it set me thinking.

To my mind, Mozilla’s core problem is the cult of the free. We should eradicate the cult of the free from web development, and Mozilla should take a small step in that direction by requesting donations from inside Firefox — on an entirely voluntary basis.

First, though, let’s review the news. From what I’ve been able to cobble together from Twitter, the layoffs have severely affected devtools, the Servo team, MDN, and likely also the events/sponsorships team. The core Gecko team is unaffected, and that’s good. Firefox will continue to exist as an independent browser — for now.

From a money-saving perspective the gutting of the events team is understandable. Frankly, I’ve been wondering for years where Mozilla got the money to run all their events and sponsorships. Now we know the answer, I guess. It will have a minor effect on me as a conference organiser, but it can be survived.

The other three are more serious: although the core Gecko team survives, many supporting teams have been gutted. If this is the last round of layoffs ... ok, we can survive this. But if it isn’t ...

The cult of the free

The gutting of MDN strikes closest to home. Part of the reason I stopped doing browser research is the advent of MDN: it took over my role of documenting web standards (and sometimes browser differences) quite efficiently, so this site was less vitally necessary.

The bigger reason I stopped doing research, however, was that I was tired of doing all of this vital web-supporting work for free. That’s also the main reason I never contributed to MDN; I don’t mind doing it, but I do very much mind doing it for free. I’ve done my duty.

And who is going to provide web standards and browser compatibility information now that MDN might go away? For a few minutes I considered returning to my old job, but I refuse to do it for free, and there is no financial support in sight.

Truth be told, I made some money with my work. In the end, Google, Microsoft, and Mozilla granted me sponsorships for a few years. It wasn’t enough, though. And of course Apple never paid up, even though from the release of Safari/Mac to the release of the iPhone they referred to my site for JavaScript compatibility information. For a while I was the Safari documentation team — for free.

This brings us to the core point I’d like to make: the culture of volunteering in web development, and especially within the Mozilla segments of our community. To my mind it is not only outdated and due for replacement; it should never have been allowed to take root in the first place.

I see the cult of the free as the web’s original sin. To my mind it’s an essentially random historical development that could have gone quite differently, but, once the idea of everything on the web being free took root, became a cultural touch point that is almost impossible to dislodge.

Granted, the cult of the free also has its positive points. But today I’m focusing on the negative ones that, to my mind, outweigh the positivity by a rather large margin. If we continue to give everything away for free, the big companies will win.

A small company gives away software for free. Large companies give away the same software for free; to them the cost is essentially peanuts, and they own the platforms the software runs on. Therefore the small company will lose, decreasing diversity in the browser market. Simple as that.

Asking for donations

Isn’t it time to change this? Isn’t it time Mozilla distances itself from the cult of the free? I know it’s deep in their DNA, but that hasn’t prevented it from hitting a very rough spot. Maybe the model is not as viable as we all thought.

So allow me to make a modest proposal: build a donations function into Firefox itself — for instance by adding a simple “Please support us” message to the update page you get to see whenever you update the browser, and by adding a Donations item to the main menu.

Oh, and don’t bother with perks for paying members. It’s not about perks, it’s about supporting the software you’re using. The software is the perk.

I’m not saying Mozilla should erect a paywall around Firefox. That would be far worse than the problem it’s supposed to solve. (The fact that it would be such a terribly bad move is part of the problem, though. If people had just learned to pay for the good stuff ...)

I’m also not saying this will solve Mozilla’s financial problems — in fact, I’m quite certain that it won’t. Still, it would be one step in the direction of a better web where consumers slowly get used to the idea of paying. Also, it might help Mozilla itself veer away from the cult of the free towards a more sustainable model, mostly by putting psychological pressure on the organisation as a whole.

We need a break with the past. Trying times like these might be the best opportunity to make that break.

The cult of the free must die so that Firefox may live.

Halfmoon: A Bootstrap Alternative with Dark Mode Built In

Css Tricks - Tue, 08/11/2020 - 4:36am

I recently launched the first production version of Halfmoon, a front-end framework that I have been building for the last few months. This is a short introductory post about what the framework is, and why I decided to build it.

The elevator pitch

Halfmoon is a front-end framework with a few interesting things going for it:

  • Dark mode built right in: Creating a dark mode version of a site is baked in and a snap.
  • Modular components: A lot of consideration has gone into making modular components — such as forms, navbars, sidebars, dropdowns, toasts, shortcuts, etc. — that can be used anywhere to make layouts, even complex ones like dashboards.
  • JavaScript is optional: Many of the components found in Halfmoon are built to work without JavaScript. However, the framework still comes with a powerful JavaScript library with no extra dependencies.
  • All the CSS classes you need: The class names should be instantly familiar to anyone who has used Bootstrap because that was the inspiration.
  • Cross-browser compatibility: Halfmoon fully supports nearly every browser under the sun, including really old ones like Internet Explorer 11.
  • Easily customizable: Halfmoon uses custom CSS properties for things like colors and layouts, making it extremely easy to customize things to your liking, even without a CSS preprocessor.
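On that last point: theming mostly comes down to overriding custom properties. Here’s a minimal sketch of the approach; the variable names below are made up for illustration, so check Halfmoon’s documentation for the real ones:

/* Hypothetical property names — Halfmoon's actual custom properties
   may differ; see the framework's docs for the real list. */
:root {
  --primary-color: #0062cc;  /* accent used by buttons, links, etc. */
  --base-font-size: 15px;    /* global type size */
}

/* Dark mode values can be scoped to whatever class toggles the theme: */
.dark-mode {
  --primary-color: #4d9fff;  /* a lighter accent for dark backgrounds */
}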

In many ways, you can think of Halfmoon as Bootstrap with an integrated dark mode implementation. It uses a lot of Bootstrap’s components with slightly altered markup in many cases.

OK, great, but why this framework?

Whenever a new framework is introduced, the same question inevitably pops up: Why did you actually build this? The answer is that I freaking love dark modes and themes. Tools that come with both a light and a dark mode (along with a toggle switch) are my favorite because I feel that being able to change a theme on a whim makes me less likely to get bored looking at it for hours. I sometimes read in dim lighting conditions (pray for my eyes), and dark modes are significantly more comfortable in that type of situation.

Anyway, a few months ago, I wanted to build a simple tool for myself that makes dark mode implementation easy for a dashboard project I was working on. After doing some research, I concluded that I had only two viable options: either pick up a JavaScript-based component library for a front-end framework — like Vuetify for Vue — or shell out some cash for a premium dark theme for Bootstrap (and I did not like the look of the free ones). I did not want to use a component library because I like building simple server-rendered websites using Django. That’s just my cup of tea. Therefore, I built what I needed: a free, good-looking front-end framework that’s along the same lines as Bootstrap, but includes equally good-looking light and dark themes out of the box.

Future plans

I just wanted to share Halfmoon with you to let you know that it exists and is freely available if you happen to be looking for an extensible framework in the same vein as Bootstrap that prioritizes dark mode in the implementation.

And, as you might imagine, I’m still working on Halfmoon. In fact I have plenty of enhancements in mind:

  • More components
  • More customization options (using CSS variables)
  • More examples and templates
  • Better tooling
  • Improved accessibility examples in the docs
  • Vanilla JavaScript implementations of useful components, such as custom multi-select (think Select2, only without jQuery), data tables and form validators, among other things.

In short, the plan is to build a framework that is really useful when it comes to building complex dashboards, but is still great for building any website. The documentation for the framework can be found on the project’s website. The code is all open-source and licensed under MIT. You can also follow the project on GitHub. I’d love for you to check it out, leave feedback, open issues, or even contribute to it.


Register for An Event Apart’s Front-End Focus Online Conference

Css Tricks - Tue, 08/11/2020 - 4:35am

(This is a sponsored post.)

An Event Apart has been doing these single-day “Online Together” conferences. You can check out the last couple, which are available on-demand (buy it, watch it when you want) for a limited time.

The next event is one that anyone reading CSS-Tricks will really want to check out. It’s called “Front-End Focus,” which is literally what we write about here all the time. Register today for the event! It takes place August 17 and features a schedule packed with amazing talks from amazing speakers, like design principles from Jeremy Keith, future-proofing CSS from Ire Aderinokun, and modern CSS tricks from Una Kravets, among several others.


System UIcons

Css Tricks - Mon, 08/10/2020 - 1:28pm

This is a great collection of icons by Corey Ginnivan that’s free, with no attribution required when you use them. The style is super simple. To me, each icon looks like an older version of the macOS icons: cute, but not too cute.

Also? The icon picker UI is slick and looks something like this today:

Oh and also, as I was looking around Corey’s personal site, I noticed this lovely UI effect when you scroll — each card stacks on top of the one before it:


zerodivs.com

Css Tricks - Mon, 08/10/2020 - 4:32am

Pretty neat little website from Joan Perals, inspired by stuff like Lynn’s A Single Div. With multiple hard-stop background-image gradients, you don’t need extra HTML elements to draw shapes — you can draw as many shapes as you want on a single element. There is even a stacking order to work with. Drawing with backgrounds is certainly CSS trickery!
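Here’s a minimal sketch of the underlying idea (my own illustration, not code from the site): each background layer is sized and positioned independently, so a single element can hold many “shapes,” and earlier layers paint on top of later ones.

/* Two shapes on one element. A gradient with identical start and end
   colors paints a solid block; a hard stop (50%/50%) splits a block
   into two colors. background-size and background-position turn each
   layer into a positioned rectangle. */
.drawing {
  width: 200px;
  height: 200px;
  background-image:
    linear-gradient(red, red),                     /* solid red square */
    linear-gradient(to right, blue 50%, gold 50%); /* two-tone bar */
  background-size: 60px 60px, 140px 30px;
  background-position: 20px 20px, 30px 120px;
  background-repeat: no-repeat;
}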

The site stores your drawing IDs in localStorage so you’ve got basic CRUD functionality right there. I bet the whole thing is a little hop away from being an offline PWA.


More Control Over CSS Borders With background-image

Css Tricks - Fri, 08/07/2020 - 11:48am

You can make a typical CSS border dashed or dotted. For example:

.box {
  border: 1px dashed black;
  border: 3px dotted red;
}

You don’t have all that much control over how big or long the dashes or gaps are. And you certainly can’t give the dashes slants, fading, or animation! You can do those things with some trickery though.

Amit Sheen built this really neat Dashed Border Generator:


The trick is using four backgrounds. The background property takes comma-separated values, so by setting four background layers (one along the top, right, bottom, and left) and sizing them to look like a border, it unlocks all this control.

So like:

.box {
  background-image:
    repeating-linear-gradient(0deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(90deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(180deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(270deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px);
  background-size: 3px 100%, 100% 3px, 3px 100%, 100% 3px;
  background-position: 0 0, 0 0, 100% 0, 0 100%;
  background-repeat: no-repeat;
}
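And because those edges are just backgrounds, they can be animated, which no real border can do. Here’s one way to do it (a sketch of mine; Amit’s generator has its own approach): oversize each strip by one 20px repeat period, then slide its background-position by exactly that period so the loop is seamless.

/* Marching dashes: each strip is one 20px period larger than the box,
   so sliding it never exposes a gap. The layers move one full period
   (left edge up, top edge right, right edge down, bottom edge left),
   and since the gradients repeat every 20px, the loop is seamless. */
.box {
  background-size:
    3px calc(100% + 20px),
    calc(100% + 20px) 3px,
    3px calc(100% + 20px),
    calc(100% + 20px) 3px;
  animation: march 1s linear infinite;
}

@keyframes march {
  from {
    background-position: 0 0, -20px 0, 100% -20px, 0 100%;
  }
  to {
    background-position: 0 -20px, 0 0, 100% 0, -20px 100%;
  }
}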

I like gumdrops.


What does 100% mean in CSS?

Css Tricks - Fri, 08/07/2020 - 9:08am

When using percentage values in CSS like this…

.element {
  margin-top: 40%;
}

…what does that % value mean here? What is it a percentage of? There’ve been so many times when I’ve been using percentages and something weird happens. I typically shrug, change the value to something else, and move on with my day.

But Amelia Wattenberger says no! in this remarkable deep dive into how percentages work in CSS and all the peculiar things we need to know about them. And as is par for the course at this point, any post by Amelia has a ton of wonderful demos that perfectly describe how the CSS in any given example works. And this post is no different.
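To spoil just one of those peculiarities, since it explains the snippet above: percentage margins and paddings, even the vertical ones, resolve against the width of the containing block (in a horizontal writing mode), not its height.

/* margin-top here is 40% of the parent's WIDTH, not its height. */
.parent {
  width: 500px;
  height: 100px;
}

.parent .element {
  margin-top: 40%; /* 40% of 500px = 200px of top margin */
}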


Every Website is an Essay

Css Tricks - Fri, 08/07/2020 - 4:38am

Every website that’s made me oooo and aaahhh lately has been of a special kind; they’re written and designed like essays. There’s an argument, a playfulness in the way that they’re not so much selling me something as they are trying to convince me of the thing. They use words and type and color in a way that makes me sit up and listen.

And I think that framing our work in this way lets us web designers explore exciting new possibilities. Instead of throwing a big carousel on the page and being done with it, thinking about making a website like an essay encourages us to focus on the tough questions. We need an introduction, we need to provide evidence for our statements, we need a conclusion, etc. This way we don’t have to get so caught up in the same old patterns that we’ve tried again and again in our work.

And by treating web design like an essay, we can be weird with the design. We can establish a distinct voice and make it sound like an honest-to-goodness human being wrote it, too.

One example of a website-as-an-essay is the Analogue Pocket site which uses real paragraphs to market their fancy new device.

Another example is the new email app Hey in which the website is nothing but paragraphs — no screenshots, no fancy product information. It feels like a political manifesto hammered onto a giant wooden door.

Apple’s marketing sites are little essays, too. Take this one section from the iPad Pro all about the LiDAR Scanner. It’s not so much trying to sell you an iPad at this point so much as it is trying to argue the case for LiDAR. And as with all good essays it answers the who, what, why, when, and how.

Another example is Stripe’s recent beautiful redesign. What I love more than the outrageously gorgeous animated gradients is the argument that the website is making. What is Stripe? How can I trust them? How easy is it to get set up? Who, what, why, when, how.

To be my own devil’s advocate for a bit though, we’re all familiar with this line of reasoning: Why care about the writing so much when people don’t read? Folks skim through a website. They don’t persevere with the text, they don’t engage with the writing, and you only have half a millisecond to hit them with something flashy before they leave. They can’t handle complex words or sentences. They can’t grasp complex ideas. So keep those paragraphs short! Remove all text from the page!

The implication here is that users are dumb. They can’t focus and they don’t care. You have to shout at them. And I kinda sorta hate that.

Instead, I think the opposite is true. They’ve seen the same boring websites for years. Everyone is tired of lifeless, humorless copywriting. They’ve seen all the animations, witnessed all the cool fonts, and in the face of all that stuff, they yawn. They yawn because it supports a bad argument, or more precisely, a bad essay; one that doesn’t charm the reader, or give them a reason to care.

So what if we made our websites more like essays and less like billboards that dot the freeways? What would that look like?


font-weight: 300 considered harmful

Css Tricks - Fri, 08/07/2020 - 4:38am

Tomáš Janoušek:

Many web pages these days set font-weight: 300 in their stylesheet. With DejaVu Sans as my preferred font, this results in very thin and light text that is hard to read, because for some reason the “DejaVu Sans ExtraLight” variant (weight 200) is being used for weights < 360 (in Chrome; in Firefox up to 399). Let’s investigate why this happens and what can be done about it.

Why are people setting font-weight: 300; at all? Well, Mac people, probably. On my macOS Catalina computer, look at how some of the default and built-in fonts differ between 400 and 300.

400 weight on the top, 300 weight on the bottom

I wouldn’t blame a designer for using a 300 weight in some cases. In fact, 11 years ago I published a snippet called “Better Helvetica” that touted this.

body {
  font-family: "HelveticaNeue-Light", "Helvetica Neue Light", "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif;
  font-weight: 300;
}

But for Tomáš, whose default font is DejaVu Sans (a default on many Linux and Android systems), the font is difficult to read when the type is that thin. Part of the issue is that if a fallback font doesn’t happen to have a 300, the spec says it can fall back all the way to 100 if needed. I believe the technical term for that is pretty gosh-darned thin.
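One way to make font-weight: 300 predictable is to only use it with a web font that actually ships a 300-weight face. A minimal sketch, with a made-up family name and file path:

/* "Example Sans" and the path are hypothetical. The point: once a real
   300 face is registered, font-weight: 300 has an exact match and the
   browser never has to go hunting down to 200 or 100. */
@font-face {
  font-family: "Example Sans";
  src: url("/fonts/example-sans-light.woff2") format("woff2");
  font-weight: 300;
  font-style: normal;
}

body {
  font-family: "Example Sans", sans-serif;
  font-weight: 300;
}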

I’ll try to avoid that myself (or use it when I’m loading web fonts I know have it), but check out Tomáš’ article for a fix on your computer if this bugs you on many sites. This actually reminds me of the different Levels of Fix.


JavaScript Fatigue

Css Tricks - Thu, 08/06/2020 - 11:32am

From Nicholas Zakas’ newsletter, on how he avoids JavaScript fatigue:

 I don’t try to learn about every new thing that comes out. There’s a limited number of hours in the day and a limited amount of energy you can devote to any topic, so I choose not to learn about anything that I don’t think will be immediately useful to me. That doesn’t mean I completely ignore new things as they are released, but rather, I try to fit them into an existing mental model so I can talk about them intelligently. For instance, I haven’t built an application in Deno, but I know that Deno is a JavaScript runtime for building server-side applications, just like Node.js. So I put Deno in the same bucket as Node.js and know that when I need to learn more, there’s a point of comparison.

I too have used the “buckets” analogy. Please allow me to continue and overuse this metaphor.

I rarely try to learn anything just for the sake of it. I learn what I need to learn as whatever I’m working on demands it. But I try to know enough so that when I read tech news, I can place what I’m reading into mental buckets. Then I can, metaphorically, reach into those buckets when those topics come up. Plus, I can keep an eye on how full those buckets are. If I find myself putting a lot of stuff into one bucket, I might plunge in there and see what’s going on as there is clearly something in the water.

You’ll need to subscribe to the newsletter to get to the content.


Building Custom Data Importers: What Engineers Need to Know

Css Tricks - Thu, 08/06/2020 - 4:37am

Importing data is a common pain point for engineering teams. Whether it’s importing CRM data, inventory SKUs, or customer details, importing data into various applications and building a solution for this is a frustrating experience nearly every engineer can relate to. Data import, as a critical product experience, is a huge headache. It delays time to value for customers, strains internal resources, and takes valuable development cycles away from developing key, differentiating product features.

Frequent error messages end-users receive when importing data. Why do we expect customers to fix this themselves?

Data importers, specifically CSV importers, haven’t been treated as key product features within the software and customer experience. As a result, engineers tend to dedicate an exorbitant amount of effort to creating less-than-ideal solutions for customers to successfully import their data.

Engineers typically create lengthy, technical documentation for customers to review when an import fails. However, this doesn’t truly solve the issue but instead offsets the burden of a great experience from the product to an end-user.

In this article, we’ll address the current problems with importing data and discuss a few key product features that are necessary to consider if you’re faced with a decision to build an in-house solution.

Importing data is typically frustrating for anyone involved at a data-led company. Simply put, there has never been a standard for importing customer data. Until now, teams have deferred to CSV templates, lengthy documentation, video tutorials, or buggy in-house solutions to allow users the ability to import spreadsheets. Companies trying to import CSV data can run into a variety of issues such as:

  • Fragmented data: With no standard way to import data, we get emails going back and forth with attached spreadsheets that are manually imported. As spreadsheets get passed around, there are obvious version control challenges. Who made this change? Why don’t these numbers add up as they did in the original spreadsheet? Why are we emailing spreadsheets containing sensitive data?
  • Improper formatting: CSV import errors frequently occur when formatting isn’t done correctly. As a result, companies often rely on internal developer resources to properly clean and format data on behalf of the client — a process that can take hours per customer, and may lead to churn anyway. This includes correcting dates or splitting fields that need to be healed prior to importing.
  • Encoding errors: There are plenty of instances where a spreadsheet can’t be imported because it’s improperly encoded. For example, a company may need a file to be saved with UTF-8 encoding (the encoding typically preferred for email and web pages) in order for it to be uploaded properly to their platform. Incorrect encoding can result in a lengthy chain of communication where the customer is burdened with correcting and re-importing their data.
  • Data normalization: A lack of data normalization results in data redundancy and a never-ending string of data quality problems that make customer onboarding particularly challenging. One example includes formatting email addresses, which are typically imported into a database, or checking value uniqueness, which can result in a heavy load on engineers to get the validation working correctly.
Remember building your first CSV importer?

When it comes down to creating a custom-built data importer, there are a few critical features that you should include to help improve the user experience. (One caveat: building a data importer can be time-consuming not only to create but also to maintain. It’s easy to ensure your company has adequate engineering bandwidth when first creating a solution, but what about maintenance in 3, 6, or 12 months?)

A preview of Flatfile Portal. It integrates in minutes using a few lines of JavaScript.

Data mapping

Mapping or column-matching (they are often used interchangeably) is an essential requirement for a data importer as the file import will typically fail without it. An example is configuring your data model to accept contact-level data. If one of the required fields is “address” and the customer who is trying to import data chooses a spreadsheet where the field is labeled “mailing address,” the import will fail because “mailing address” doesn’t correlate with “address” in a back-end system. This is typically ‘solved’ by providing a pre-built CSV template for customers, who then have to manipulate their data, effectively increasing time-to-value during a product experience. Data mapping needs to be included in the custom-built product as a key feature to retain data quality and improve the customer data onboarding experience.

Auto-column matching CSV data is the bread and butter of Portal, saving massive amounts of time for customers while providing a delightful import experience.

Data validation

Data validation, which checks if the data matches an expected format or value, is another critical feature to include in a custom data importer. Data validation is all about ensuring the data is accurate and is specific to your required data model. For example, if special characters can’t be used within a certain template, error messages can appear during the import stage. Spreadsheets with hundreds of rows of validation errors are a nightmare for customers, since either they have to fix the issues themselves or your team spends hours on end cleaning the data. Automatic data validators streamline the healing of incoming data without the need for manual review.

We built Data Hooks into Portal to quickly normalize data on import. A perfect use-case would be validating email uniqueness against a database.

Data parsing

Data parsing is the process of taking an aggregation of information (in a spreadsheet) and breaking it into discrete parts. It’s the separation of data. In a custom-built data importer, a data parsing feature should not only have the ability to go from a file to an array of discrete data but also streamline the process for customers.

Data transformation

Data transformation means making changes to imported data as it’s flowing into your system to meet an expected or desired value. Rather than sending data back to users with an error message for them to fix, data transformation can make small, systematic tweaks so that the users’ data is more usable in your backend. For example, when transferring a task list, prioritization data could be transformed into a different value, such as numbers instead of labels.

Data Hooks normalize imported customer data automatically using validation rules set in the Portal JSON config. These highly adaptable hooks can auto-validate nearly any incoming customer data.

We’ve baked all of the above features into Portal, our flagship CSV importer at Flatfile. Now that we’ve reviewed some of the must-have features of a data importer, the next obvious question for an engineer building an in-house importer is typically… should they?

Engineering teams that are taking on this task typically use custom or open source solutions, which may not adhere to specific use-cases. Building a comprehensive data importer also brings UX challenges when building a UI and maintaining application code to handle data parsing, normalization, and mapping. And that’s before considering how customer data requirements may change in future months, and the ramifications of maintaining a custom-built solution.

Companies facing data import challenges are now considering integrating a pre-built data importer such as Flatfile Portal. We’ve built Portal to be the elegant import button for web apps. With just a few lines of JavaScript, Portal can be implemented alongside any data model and validation ruleset, typically in a few hours. Engineers no longer need to dedicate hours cleaning up and formatting data, nor do they need to custom build a data importer (unless they want to!). With Flatfile, engineers can focus on creating product-differentiating features, rather than work on solving spreadsheet imports.

Importing data is fraught with challenges, and there are several critical features necessary to include when building a data importer. The alternative to a custom-built solution is to look for a pre-built data importer such as Portal.

Flatfile’s mission is to remove barriers between humans and data. With AI-assisted data onboarding, they eliminate repetitive work and make B2B data transactions fast, intuitive, and error-free. Flatfile automatically learns how imported data should be structured and cleaned, enabling customers and teams to spend more time using their data instead of fixing it. Flatfile has transformed over 300 million rows of data for companies like ClickUp, Blackbaud, Benevity, and Toast. To learn more about Flatfile’s products, Portal and Concierge, visit flatfile.io.

The post Building Custom Data Importers: What Engineers Need to Know appeared first on CSS-Tricks.


Warp SVG Online

Css Tricks - Wed, 08/05/2020 - 1:15pm

The warping is certainly the cool part here. Some fancy math literally transforms the path data to do the warping. But the UX detail work here is just as nice. Scrolling the page zooms in and out via a transform: scale() on the SVG wrapper (clever!). Likewise, holding the spacebar lets you pan around which is as simple as transform: translate() on another wrapper (smart!). To warp your own SVG files, you just drag-and-drop them on the page (easy!).
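
As a rough sketch of the zoom-and-pan idea described above (this is not the tool’s actual code; the wrapper id and the scale limits are assumptions), the whole interaction can hang off three listeners:

// Rough sketch of scroll-to-zoom and spacebar-to-pan; not the tool's actual code.
// Assumes the SVG sits inside an element with id "wrapper".
const wrapper = document.querySelector("#wrapper");
let scale = 1, tx = 0, ty = 0, panning = false;

const applyTransform = () => {
  wrapper.style.transform = `translate(${tx}px, ${ty}px) scale(${scale})`;
};

// Scrolling zooms: map wheel movement onto a clamped scale factor
addEventListener("wheel", (e) => {
  scale = Math.min(10, Math.max(0.1, scale - e.deltaY * 0.001));
  applyTransform();
}, { passive: true });

// Holding the spacebar toggles panning
addEventListener("keydown", (e) => { if (e.code === "Space") panning = true; });
addEventListener("keyup", (e) => { if (e.code === "Space") panning = false; });

// While panning, translate the wrapper by the mouse movement
addEventListener("mousemove", (e) => {
  if (!panning) return;
  tx += e.movementX;
  ty += e.movementY;
  applyTransform();
});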


The post Warp SVG Online appeared first on CSS-Tricks.


Chapter 1: Birth

Css Tricks - Wed, 08/05/2020 - 7:09am

Sir Tim Berners-Lee is fascinated with information. It has been his life’s work. For over four decades, he has sought to understand how it is mapped and stored and transmitted. How it passes from person to person. How the seeds of information become the roots of dramatic change. It is so fundamental to the work that he has done that when he wrote the proposal for what would eventually become the World Wide Web, he called it “Information Management, a Proposal.”

Information is the web’s core function. A series of bytes stream across the world and at the end of it is knowledge. The mechanism for this transfer — what we know as the web — was created by the intersection of two things. The first is the Internet, the technology that makes it all possible. The second is hypertext, the concept that grounds its use. They were brought together by Sir Tim Berners-Lee. And when he was done he did something truly spectacular. He gave it away to everyone to use for free.

When Berners-Lee submitted “Information Management, a Proposal” to his superiors, they returned it with a comment on the top that read simply:

Vague, but exciting…

The web wasn’t a sure thing. Without the hindsight of today it looked far too simple to be effective. In other words, it was a hard sell. Berners-Lee was proficient at many things, but he was never a great salesman. He loved his idea for the web. But he had to convince everybody else to love it too.

Sir Tim Berners-Lee has a mind that races. He has been known — based on interviews and public appearances — to jump from one idea to the next. He is almost always several steps ahead of what he is saying, which is often quite profound. Until recently, he only gave a rare interview here and there, and masked his greatest achievements with humility and a wry British wit.

What is immediately apparent is that Sir Tim Berners-Lee is curious. Curious about everything. It has led him to explore some truly revolutionary ideas before they became truly revolutionary. But it also means that his focus is typically split. It makes it hard for him to hold on to things in his memory. “I’m certainly terrible at names and faces,” he once said in an interview. His original fascination with the elements for the web came from a very personal need to organize his own thoughts and connect them together, disparate and unconnected as they are. It is not at all unusual that when he reached for a metaphor for that organization, he came up with a web.

As a young boy, his curiosity was encouraged. His parents, Conway Berners-Lee and Mary Lee Woods, were mathematicians. They worked on the Ferranti Mark I, the world’s first commercially available computer, in the 1950s. They fondly speak of Berners-Lee as a child, taking things apart, experimenting with amateur engineering projects. There was nothing that he didn’t seek to understand further. Electronics — and computers specifically — were particularly enchanting.

Berners-Lee sometimes tells the story of a conversation he had with his father as a young boy about the limitations of computers making associations between information that was not intrinsically linked. “The idea stayed with me that computers could be much more powerful,” Berners-Lee recalls, “if they could be programmed to link otherwise unconnected information. In an extreme view, the world can be seen as only connections.” He didn’t know it yet, but Berners-Lee had stumbled upon the idea of hypertext at a very early age. It would be several years before he would come back to it.

History is filled with attempts to organize knowledge. An oft-cited example is the Library of Alexandria, a fabled library of Ancient Greece that was thought to have had tens of thousands of meticulously organized texts.


At the turn of the century, Paul Otlet tried something similar in Belgium. His project was called the Répertoire Bibliographique Universel (Universal Bibliography). Otlet and a team of researchers created a library of over 15 million index cards, each with a discrete and small piece of information in topics ranging from science to geography. Otlet devised a sophisticated numbering system that allowed him to link one index card to another. He fielded requests from researchers around the world via mail or telegram, and Otlet’s researchers could follow a trail of linked index cards to find an answer. Once properly linked, information becomes infinitely more useful.

A sudden surge of scientific research in the wake of World War II prompted Vannevar Bush to propose another idea. In his groundbreaking essay in The Atlantic in 1945 entitled “As We May Think,” Bush imagined a mechanical library called a Memex. Like Otlet’s Universal Bibliography, the Memex stored bits of information. But instead of index cards, everything was stored on compact microfilm. Through the process of what he called “associative indexing,” users of the Memex could follow trails of related information through an intricate web of links.

The list of attempts goes on. But it was Ted Nelson who finally gave the concept a name in 1965, two decades after Bush’s article in The Atlantic. He called it hypertext.

Hypertext is, essentially, linked text. Nelson observed that in the real world, we often give meaning to the connections between concepts; it helps us grasp their importance and remember them for later. The proximity of a Post-It to your computer, the orientation of ingredients in your refrigerator, the order of books on your bookshelf. Invisible though they may seem, each of these signifiers holds meaning, whether consciously or subconsciously, and they are only fully realized when taking a step back. Hypertext was a way to bring those same kinds of meaningful connections to the digital world.

Nelson’s primary contribution to hypertext is a number of influential theories and a decades-long project still in progress known as Xanadu. Much like the web, Xanadu uses the power of a network to create a global system of links and pages. However, Xanadu puts a far greater emphasis on the ability to trace text to its original author for monetization and attribution purposes. This distinction, known as transclusion, has been a near impossible technological problem to solve.

Nelson’s interest in hypertext stems from the same issue with memory and recall that Berners-Lee has. He refers to it as his hummingbird mind. Nelson finds it hard to hold on to associations he creates in the real world. Hypertext offers a way for him to map associations digitally, so that he can call on them later. Berners-Lee and Nelson met for the first time a couple of years after the web was invented. They exchanged ideas and philosophies, and Berners-Lee was able to thank Nelson for his influential thinking. At the end of the meeting, Berners-Lee asked if he could take a picture. Nelson, in turn, asked for a short video recording. Each was commemorating the moment they knew they would eventually forget. And each turned to technology for a solution.

By the mid-80s, on the wave of innovation in personal computing, there were several hypertext applications out in the wild. The hypertext community — a dedicated group of software engineers that believed in the promise of hypertext — created programs for researchers, academics, and even off-the-shelf personal computers. Every research lab worth its salt had a hypertext project. Together they built entirely new paradigms into their software, processes and concepts that feel wonderfully familiar today but were completely outside the realm of possibilities just a few years earlier.

At Brown University, the very place where Ted Nelson was studying when he coined the term hypertext, Norman Meyrowitz, Nancy Garrett, and Karen Catlin were the first to breathe life into the hyperlink, which was introduced in their program Intermedia. At Symbolics, Janet Walker was toying with the idea of saving links for later, a kind of speed dial for the digital world – something she was calling a bookmark. At the University of Maryland, Ben Shneiderman sought to compile and link the world’s largest source of information with his Interactive Encyclopedia System.

Dame Wendy Hall, at the University of Southampton, sought to extend the life of the link further in her own program, Microcosm. Each link made by the user was stored in a linkbase, a database apart from the main text specifically designed to store metadata about connections. In Microcosm, links could never die, never rot away. If their connection was severed they could point elsewhere, since links weren’t directly tied to text. You could even write a bit of text alongside links, expanding a bit on why the link was important, or add separate layers of links to a document: one, for instance, a tailored set of carefully curated references for experts on a given topic; the other, a more laid-back set of links for the casual audience.

There were mailing lists and conferences and an entire community that was small, friendly, fiercely competitive and locked in an arms race to find the next big thing. It was impossible not to get swept up in the fervor. Hypertext enabled a new way to store actual, tangible knowledge; with every innovation the digital world became more intricate and expansive and all-encompassing.

Then came the heavy hitters. Under a shroud of mystery, researchers and programmers at the legendary Xerox PARC were building NoteCards. Apple caught wind of the idea and found it so compelling that they shipped their own hypertext application called Hypercard, bundled right into the Mac operating system. If you were a late-80s Apple user, you likely have fond memories of Hypercard, an interface that allowed you to create a card and quickly link it to another. Cards could be anything, a recipe maybe, or the prototype of your latest project. And, one by one, you could link those cards up, visually and with no friction, until you had a digital reflection of your ideas.

Towards the end of the 80s, it was clear that hypertext had a bright future. In just a few short years, the software had advanced in leaps and bounds.

After a brief stint studying physics at The Queen’s College, Oxford, Sir Tim Berners-Lee returned to his first love: computers. He eventually found a short-term, six-month contract at the particle physics lab Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), or simply, CERN.

CERN is responsible for a long line of particle physics breakthroughs. Most recently, they built the Large Hadron Collider, which led to the confirmation of the Higgs Boson particle, a.k.a. the “God particle.”

CERN doesn’t operate like most research labs. Its internal staff makes up only a small percentage of the people that use the lab. Any research team from around the world can come and use the CERN facilities, provided that they are able to prove their research fits within the stated goals of the institution. A majority of CERN occupants are from these research teams. CERN is a dynamic, sprawling campus of researchers, ferrying from location to location on bicycles or mine-carts, working on the secrets of the universe. Each team is expected to bring their own equipment and expertise. That includes computers.

Berners-Lee was hired to assist with software on an earlier version of the particle accelerator called the Proton Synchrotron. When he arrived, he was blown away by the amount of pure, unfiltered information that flowed through CERN. It was nearly impossible to keep track of it all and equally impossible to find what you were looking for. Berners-Lee wanted to capture that information and organize it.

His mind flashed back to that conversation with his father all those years ago. What if it were possible to create a computer program that allowed you to make random associations between bits of information? What if you could, in other words, link one thing to another? He began working on a software project on the side for himself. Years later, that would be the same way he built the web. He called this project ENQUIRE, named for a Victorian handbook he had read as a child.

Using a simple prompt, ENQUIRE users could create a block of info, something like Otlet’s index cards all those years ago. And just like the Universal Bibliography, ENQUIRE allowed you to link one block to another. Tools were bundled in to make it easier to zoom back and see the connections between the links. For Berners-Lee this filled a simple need: it replaced, with a digital tool, the part of his memory that made it impossible for him to remember names and faces.

Compared to the software being actively developed at the University of Southampton or at Xerox or Apple, ENQUIRE was unsophisticated. It lacked a visual interface, and its format was rudimentary. A program like Hypercard supported rich-media and advanced two-way connections. But ENQUIRE was only Berners-Lee’s first experiment with hypertext. He would drop the project when his contract was up at CERN.

Berners-Lee would go and work for himself for several years before returning to CERN. By the time he came back, there would be something much more interesting for him to experiment with. Just around the corner was the Internet.

Packet switching is the single most important invention in the history of the Internet. It is how messages are transmitted over a globally decentralized network. It was discovered almost simultaneously in the late-60s by two different computer scientists, Donald Davies and Paul Baran. Both were interested in the way in which it made networks resilient.

Traditional telecommunications at the time were managed by what is known as circuit switching. With circuit switching, a direct connection is open between the sender and receiver, and the message is sent in its entirety between the two. That connection needs to be persistent and each channel can only carry a single message at a time. That line stays open for the duration of a message and everything is run through a centralized switch. 

If you’re searching for an example of circuit switching, you don’t have to look far. That’s how telephones work (or used to, at least). If you’ve ever seen an old film (or even a TV show like Mad Men) where an operator pulls a plug out of a wall and plugs it back in to connect a telephone call, that’s circuit switching (though that was all eventually automated). Circuit switching works because everything is sent over the wire all at once and through a centralized switch. That’s what the operators are connecting.

Packet switching works differently. Messages are divided into smaller bits, or packets, and sent over the wire a little at a time. They can be sent in any order because each packet has just enough information to know where in the order it belongs. Packets are sent through until the message is complete, and then re-assembled on the other side. There are a few advantages to a packet-switched network. Multiple messages can be sent at the same time over the same connection, split up into little packets. And crucially, the network doesn’t need centralization. Each node in the network can pass around packets to any other node without a central routing system. This made it ideal in a situation that requires extreme adaptability, like in the fallout of an atomic war, Paul Baran’s original reason for devising the concept.

When Davies began shopping around his idea for packet switching to the telecommunications industry, he was shown the door. “I went along to Siemens once and talked to them, and they actually used the words, they accused me of technical — they were really saying that I was being impertinent by suggesting anything like packet switching. I can’t remember the exact words, but it amounted to that, that I was challenging the whole of their authority.” Traditional telephone companies were not at all interested in packet switching. But ARPA was.

ARPA, later known as DARPA, was a research agency embedded in the United States Department of Defense. It was created in the throes of the Cold War — a reaction to the launch of the Sputnik satellite by the Soviet Union — but without a core focus. (It was created at the same time as NASA, so launching things into space was already taken.) To adapt to their situation, ARPA recruited research teams from colleges around the country. They acted as a coordinator and mediator between several active university research projects with a military focus.

ARPA’s organization had one surprising and crucial side effect. It was composed mostly of professors and graduate students who were working at its partner universities. The general attitude was that as long as you could prove some sort of modest relation to a military application, you could pitch your project for funding. As a result, ARPA was filled with lots of ambitious and free-thinking individuals working inside of a buttoned-up government agency, with little oversight, coming up with the craziest and most world-changing ideas they could. “We expected that a professional crew would show up eventually to take over the problems we were dealing with,” recalls Bob Kahn, an ARPA programmer critical to the invention of the Internet. The “professionals” never showed up.

One of those professors was Leonard Kleinrock at UCLA. He was involved in the first stages of ARPANET, the network that would eventually become the Internet. His job was to help implement the most controversial part of the project, the still theoretical concept known as packet switching, which enabled a decentralized and efficient design for the ARPANET network. It is likely that the Internet would not have taken shape without it. Once packet switching was implemented, everything came together quickly. By the early 1980s, it was simply called the Internet. By the end of the 1980s, the Internet went commercial and global, including a node at CERN.


The first applications of the Internet are still in use today. FTP, used for transferring files over the network, was one of the first things built. Email is another one. It had been around for a couple of decades on a closed system already. When the Internet began to spread, email became networked and infinitely more useful.

Other projects were aimed at making the Internet more accessible. They had names like Archie, Gopher, and WAIS, and have largely been forgotten. They were united by a common goal of bringing some order to the chaos of a decentralized system. WAIS and Archie did so by indexing the documents put on the Internet to make them searchable and findable by users. Gopher did so with a structured, hierarchical system. 

Kleinrock was there when the first message was ever sent over the Internet. He was supervising that part of the project, and even then, he knew what a revolutionary moment it was. However, he is quick to note that not everybody shared that feeling in the beginning. He recalls the sentiment held by the titans of the telecommunications industry like the Bell Telephone Company. “They said, ‘Little boy, go away,’ so we went away.” Most felt that the project would go nowhere, nothing more than a technological fad.

In other words, no one was paying much attention to what was going on and no one saw the Internet as much of a threat. So when that group of professors and graduate students tried to convince their higher-ups to let the whole thing be free — to let anyone implement the protocols of the Internet without a need for licenses or license fees — they didn’t get much pushback. The Internet slipped into public use and only the true technocratic dreamers of the late 20th century could have predicted what would happen next.

Berners-Lee returned to CERN in a fellowship position in 1984. It was four years after he had left. A lot had changed. CERN had developed its own network, known as CERNET, but by 1989 it had hooked up to the new, internationally standard Internet. “In 1989, I thought,” he recalls, “look, it would be so much easier if everybody asking me questions all the time could just read my database, and it would be so much nicer if I could find out what these guys are doing by just jumping into a similar database of information for them.” Put another way, he wanted to share his own homepage, and get a link to everyone else’s.

What he needed was a way for researchers to share these “databases” without having to think much about how it all works. His way in with management was operating systems. CERN’s research teams all bring their own equipment, including computers, and there’s no way to guarantee they’re all running the same OS. Interoperability between operating systems is a difficult problem by design; generally speaking, the goal of an OS is to lock you in. Among its many other uses, a globally networked hypertext system like the web was a wonderful way for researchers to share notes between computers using different operating systems.

However, Berners-Lee had a bit of trouble explaining his idea. He’s never exactly been concise. By 1989, when he wrote “Information Management, a Proposal,” Berners-Lee already had worldwide ambitions. The document is thousands of words, filled with diagrams and charts. It jumps energetically from one idea to the next without fully explaining what’s just been said. Much of what would eventually become the web was included in the document, but it was just too big of an idea. It was met with a lukewarm response — that “Vague, but exciting” comment scrawled across the top.

A year later, in May of 1990, at the encouragement of his boss Mike Sendall (the author of that comment), Berners-Lee circulated the proposal again. This time it was enough to buy him a bit of time internally to work on it. He got lucky. Sendall understood his ambition and aptitude. He wouldn’t always get that kind of chance. The web needed to be marketed internally as an invaluable tool. CERN needed to need it. Taking complex ideas and boiling them down to their most salient, marketable points, however, was not Berners-Lee’s strength. For that, he was going to need a partner. He found one in Robert Cailliau.

Cailliau was a CERN veteran. By 1989, he’d worked there as a programmer for over 15 years. He’d embedded himself in the company culture, proving a useful resource helping teams organize their informational toolset and knowledge-sharing systems. He had helped several teams at CERN do exactly the kind of thing Berners-Lee was proposing, though at a smaller scale.

Temperamentally, Cailliau was about as different from Berners-Lee as you could get. He was hyper-organized and fastidious. He knew how to sell things internally, and he had made plenty of political inroads at CERN. What he shared with Berners-Lee was an almost insatiable curiosity. During his time as a nurse in the Belgian military, he got fidgety. “When there was slack at work, rather than sit in the infirmary twiddling my thumbs, I went and got myself some time on the computer there.” He ended up as a programmer in the military, working on war games and computerized models. He couldn’t help but look for the next big thing.

In the late 80s, Cailliau had a strong interest in hypertext. He was taking a look at Apple’s Hypercard as a potential internal documentation system at CERN when he caught wind of Berners-Lee’s proposal. He immediately recognized its potential.

Working alongside Berners-Lee, Cailliau pieced together a new proposal. Something more concise, more understandable, and more marketable. While Berners-Lee began putting together the technologies that would ultimately become the web, Cailliau began trying to sell the idea to interested parties inside of CERN.

The web, in all of its modern uses and ubiquity, can be difficult to define as just one thing — we have the web on our refrigerators now. In the beginning, however, the web was made up of only a few essential features.

There was the web server, a computer wired to the Internet that can transmit documents and media (webpages) to other computers. Webpages are served via HTTP, a protocol designed by Berners-Lee in the earliest iterations of the web. HTTP is a layer on top of the Internet, and was designed to make things as simple, and resilient, as possible. HTTP is so simple that it forgets a request as soon as it has made it. It has no memory of the webpages it has served in the past. The only thing HTTP is concerned with is the request it’s currently making. That makes it magnificently easy to use.

These webpages are sent to browsers, the software that you’re using to read this article. Browsers can read documents handed to them by a server because they understand HTML, another early invention of Sir Tim Berners-Lee. HTML is a markup language; it allows programmers to give meaning to their documents so that they can be understood. The “H” in HTML stands for Hypertext. Like HTTP, HTML — all of the building blocks programmers can use to structure a document — wasn’t all that complex, especially when compared to other hypertext applications at the time. HTML comes from a long line of other, similar markup languages, but Berners-Lee expanded it to include the link, in the form of an anchor tag. The <a> tag is the most important piece of HTML because it serves the web’s greatest function: to link together information.

The hyperlink was made possible by the Universal Resource Identifier (URI), later renamed the Uniform Resource Identifier after the IETF found the word “universal” a bit too substantial. But for Berners-Lee, that was exactly the point. “Its universality is essential: the fact that a hypertext link can point to anything, be it personal, local or global, be it draft or highly polished,” he wrote in his personal history of the web. Of all the original technologies that made up the web, Berners-Lee — and several others — have noted that the URL was the most important.

By Christmas of 1990, Sir Tim Berners-Lee had all of that built. A full prototype of the web was ready to go.

Cailliau, meanwhile, had had a bit of success trying to sell the idea to his bosses. He had hoped that his revised proposal would give him a team and some time. Instead he got six months and a single staff member, intern Nicola Pellow. Pellow was new to CERN, on placement for her mathematics degree. But her work on the Line Mode Browser, which enabled people from around the world using any operating system to browse the web, proved a crucial element in the web’s early success. Berners-Lee’s work, combined with the Line Mode Browser, became the web’s first set of tools. It was ready to show to the world.

When the team at CERN submitted a paper on the World Wide Web to the San Antonio Hypertext Conference in 1991, it was soundly rejected. They went anyway, and set up a table with a computer to demo it to conference attendees. One attendee remarked:

They have chutzpah calling that the World Wide Web!

The remarkable thing about the web is that it was not at all sophisticated. Its use of hypertext was elementary, allowing for only simplistic, text-based links. And without two-way links, pretty much a given in hypertext applications, links could go dead at any minute. There was no linkbase, or sophisticated metadata assigned to links. There was just the anchor tag. The protocols that ran on top of the Internet were similarly basic. HTTP only allowed for a handful of actions, and alternatives like Gopher or WAIS offered far more options for advanced connections through the Internet network.

It was hard to explain, difficult to demo, and had overly lofty ambition. It was created by a man who didn’t have much interest in marketing his ideas. Even the name was somewhat absurd. “WWW” is one of only a handful of acronyms that actually takes longer to say than the full “World Wide Web.”

We know how this story ends. The web won. It’s used by billions of people and runs through everything we do. It is among the most remarkable technological achievements of the 20th century.

It had a few advantages, of course. It was instantly global and widely accessible thanks to the Internet. And the URL — and its uniqueness — is one of the more clever concepts to come from networked computing.

But if you want to truly understand why the web succeeded, we have to come back to information. One of Berners-Lee’s most deeply held beliefs is that information is incredibly powerful, and that it deserves to be free. He believed that the web could deliver on that promise. For it to do that, the web would need to spread.

Berners-Lee looked to his predecessors for inspiration: the creators of the Internet. The Internet succeeded, in part, because they gave it away to everyone. After considering several licensing options, he lobbied CERN to release the web unlicensed to the general public. CERN, an organization far more interested in particle physics breakthroughs than hypertext, agreed. In 1993, the web officially entered the public domain.

And that was the turning point. They didn’t know it then, but that was the moment the web succeeded. When Berners-Lee was able to make globally available information truly free.

In an interview some years ago, Berners-Lee recalled how it was that the web came to be.

I had the idea for it. I defined how it would work. But it was actually created by people.

That may sound like humility from one of the world’s great thinkers — and it is, a little — but it is also the truth. The web was Berners-Lee’s gift to the world. He gave it to us, and we made it what it is. He and his team fought hard at CERN to make that happen.

Berners-Lee knew that with the resources available to him he would never be able to spread the web sufficiently outside of the hallways of CERN. Instead, he packaged up all the code that was needed to build a browser into a library called libwww and posted it to a Usenet group. That was enough for some people to get interested in browsers. But before browsers would be useful, you needed something to browse.

The post Chapter 1: Birth appeared first on CSS-Tricks.


The Cicada Principle, revisited with CSS variables

Css Tricks - Tue, 08/04/2020 - 1:13pm

Lea Verou digging up the CSS trickery classic and applying it to clip the backgrounds of some code blocks:

The main idea is simple: You write your main rule using CSS variables, and then use :nth-of-*() rules to set these variables to something different every N items. If you use enough variables, and choose your Ns for them to be prime numbers, you reach a good appearance of pseudo-randomness with relatively small Ns.

The update to the trick is that she doesn’t update the whole block of code for each of the selectors; it’s just one part of the clipping path that gets updated via a custom property.
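
Here is a minimal sketch of the idea (the selectors, variable names, and values are made up for illustration, not taken from Lea’s post): each :nth-of-type() rule with a prime cycle overrides one custom property, and because the cycle lengths are coprime, the combined pattern only repeats every 2 × 3 × 5 = 30 items.

/* Illustrative sketch only; selectors and values are invented, not Lea's code */
pre {
  --a: 2em; --b: 5em; --c: 9em;
  clip-path: polygon(0 0, 100% 0, 100% calc(100% - var(--a)),
                     var(--c) calc(100% - var(--a)), var(--b) 100%, 0 100%);
}
/* Prime cycles: the combination only repeats every 2 * 3 * 5 = 30 items */
pre:nth-of-type(2n) { --a: 3em; }
pre:nth-of-type(3n) { --b: 7em; }
pre:nth-of-type(5n) { --c: 12em; }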


Lea’s post has more trickery using the same concepts.


The post The Cicada Principle, revisited with CSS variables appeared first on CSS-Tricks.


Jetpack CRM

Css Tricks - Tue, 08/04/2020 - 4:39am

About a year ago, Automattic bought up Zero BS CRM. The thinking at the time was that it could be rebranded into the Jetpack suite and, well, that happened.

CRM meaning “Customer Relationship Management” if you’re like me and this is a little outside your sphere of everyday software. CRMs are big business though. When I lived in Silicon Valley, I would regularly drive by a very large building bearing the name of a CRM company that you’ve definitely heard of.

The name is a good one since CRMs are for helping you manage your customers. I have a brother-in-law who sells office furniture. He definitely uses a CRM. He gets leads for new clients, he helps figure out what they need, gets them quotes, and deals with billing them. It sounds straightforward, but it kinda isn’t. CRMs are complicated. Who better to bring you a good CRM than someone who brings you a good CMS?

The fact that people are different and businesses are different means that, rather than forcing your business to behave like a CRM thinks it should, the CRM should flex to what you need. Jetpack CRM calls it a DIY CRM:

We let entrepreneurs pick and choose only what they need.

What appeals to me about Jetpack CRM is that you can leverage software that you already have and use. I swear the older I get the more cautious I become about adding major new software products to my life. There are always learning curves that are steeper than you want, gotchas you didn’t foresee, and maintenance you end up loathing. The idea of adding major new functionality under the WordPress roof feels much better to me. Every time I do that with WordPress, I end up thinking it was the right move. I’ve done it now with forums, newsletters, and eCommerce, and each time was great.

I think it’s a smart move for Automattic. People should be able to do just about anything with their website, especially when it has the power of a login system, permission system, database, and all this infrastructure ready to go like any given WordPress site has. A CRM is a necessary thing for a lot of businesses and now Automattic has an answer to that.

Speaking of all the modularity of turning on and off features, you can also select a level that, essentially, specifies how much you want Jetpack CRM to take over your WordPress admin.

If running a CRM is the primary thing you’re doing with your WordPress site, Jetpack CRM can kinda take over and make the admin primarily that with the “CRM Only” setting. Or, just augment the admin and leave everything else as-is with the “Full” setting.

Like most things Jetpack these days, this is an à la carte purchase if you need features beyond what the free plan offers. It’s not bundled with any other Jetpack stuff.

It’s also like WooCommerce in that there are a bunch of extensions for it that you only have to buy if you need them. For example, the $49 Gravity Forms extension allows you to build your lead forms with the Gravity Forms plugin, or it comes bundled in any of the paid plans. The $39 MailChimp plugin connects new leads to MailChimp lists.

Mike Stott over on WordPress Tavern:

“There’s a massive opportunity for CRM in the WordPress space,” said Stott. “A CRM is not like installing an SEO plugin on every website you own — generally you’d only have a single CRM for your business — but it’s the core of your business. The fact that 3,000+ users and counting are choosing WordPress to run their CRM is a great start.”

I’m a pretty heavy Jetpack user myself, using almost all its features.

Just to be clear, this isn’t some new feature of the core Jetpack plugin that’ll just show up on your site when you upgrade versions. It’s a totally separate product. In fact, the plugin folder you install to is called “zero-bs-crm,” the original name of the product.

I was able to install and play around with it right here on CSS-Tricks with no trouble at all. Being able to create invoices right here on the site is pretty appealing to me, so I’ll be playing with that some more.

The post Jetpack CRM appeared first on CSS-Tricks.


Friction Logs

Css Tricks - Mon, 08/03/2020 - 1:27pm

I first heard the term “Friction Log” from Suz Hinton back in April on ShopTalk. The idea makes an extreme amount of sense: Use a thing, and write down moments where you felt friction.

Did some installation step bug out? Did you see something that the docs didn’t mention? Did you have to stop and wonder if some particular API existed or how it worked? Were you confused about why a certain thing was called what it was? Friction! Log it. Then use that log to smooth out that friction because whatever is being built will be better for it.

I mention this because my friend Rick Blalock just started frictionlog.com based on that idea. I’d say that it’s a little weird to watch someone else getting stuck, but I know better. People have told me time and time again that they enjoy watching me get stuck and unstuck in screencasts. This is like that, except these moments of getting stuck are moments of friction worth calling out.

I’d love to see a moment in the industry where you pay people to do friction logs on your product. Like a form of extra-educated customer research.


The post Friction Logs appeared first on CSS-Tricks.


The Making of: Netlify’s Million Devs SVG Animation Site

Css Tricks - Mon, 08/03/2020 - 5:04am

The following article captures the process of building the Million Developers microsite for Netlify. This project was built by a few folks, and we’ve captured some parts of the process here, focusing mainly on the animation aspects, in case any are helpful to others building similar experiences.

Building a Vue App out of an SVG

The beauty of SVG is you can think of it, and the coordinate system, as a big game of battleship. You’re really thinking in terms of x, y, width, and height.

<div id="app">
  <app-login-result-sticky v-if="user.number" />
  <app-github-corner />
  <app-header />
  <!-- this is one big SVG -->
  <svg id="timeline" xmlns="http://www.w3.org/2000/svg" :viewBox="timelineAttributes.viewBox">
    <!-- this is the desktop path -->
    <path class="cls-1 timeline-path" transform="translate(16.1 -440.3)" d="M951.5,7107..." />
    <!-- this is the path for mobile -->
    <app-mobilepath v-if="viewportSize === 'small'" />
    <!-- all of the stations, broken down by year -->
    <app2016 />
    <app2017 />
    <app2018 />
    <app2019 />
    <app2020 />
    <!-- the 'you are here' marker, only shown on desktop and if you're logged in -->
    <app-youarehere v-if="user.number && viewportSize === 'large'" />
  </svg>
</div>

Within the larger app component, we have the large header, but as you can see, the rest is one giant SVG. From there, we broke down the rest of the giant SVG into several components:

  • Candyland-type paths for both desktop and mobile, shown conditionally by a state in the Vuex store
  • There are 27 stations, not including their text counterparts, and many decorative components like bushes, trees, and streetlamps, which is a lot to keep track of in one component, so they’re broken down by year
  • The ‘you are here’ marker, only shown on desktop and if you’re logged in

SVG is wonderfully flexible because not only can we draw absolute and relative shapes and paths within that coordinate system, we can also draw SVGs within SVGs. We just need to define the x, y, width and height of those SVGs and we can mount them inside the larger SVG, which is exactly what we’re going to do with all these components so that we can adjust their placement whenever needed. The <g> within the components stands for group; you can think of them a little like divs in HTML.

So here’s what this looks like within the year components:

<template>
  <g>
    <!-- decorative components -->
    <app-tree x="650" y="5500" />
    <app-tree x="700" y="5550" />
    <app-bush x="750" y="5600" />

    <!-- station component -->
    <app-virtual x="1200" y="6000" xSmall="50" ySmall="15100" />

    <!-- text component, with slots -->
    <app-text
      x="1400"
      y="6500"
      xSmall="50"
      ySmall="15600"
      num="20"
      url-slug="jamstack-conf-virtual"
    >
      <template v-slot:date>May 27, 2020</template>
      <template v-slot:event>Jamstack Conf Virtual</template>
    </app-text>
    ...
</template>

<script>
...
export default {
  components: {
    // loading in the decorative components synchronously
    AppText,
    AppTree,
    AppBush,
    AppStreetlamp2,
    // loading in the heavy station components asynchronously
    AppBuildPlugins: () => import("@/components/AppBuildPlugins.vue"),
    AppMillion: () => import("@/components/AppMillion.vue"),
    AppVirtual: () => import("@/components/AppVirtual.vue"),
  },
};
...
</script>

Within these components, you can see a number of patterns:

  • We have bushes and trees for decoration that we can sprinkle around via x and y values passed as props
  • We can have individual station components, which also have two different positioning values, one for large and small devices
  • We have a text component, which has three available slots, one for the date, and two for two different text lines
  • We’re also loading in the decorative components synchronously, and loading those heavier SVG stations async
SVG Animation

Header animation for Million Devs

The SVG animation is done with GreenSock (GSAP), with their new ScrollTrigger plugin. I wrote up a guide on how to work with GSAP for their latest 3.0 release earlier this year. If you’re unfamiliar with this library, that might be a good place to start.

Working with the plugin is thankfully straightforward. Here is the base of the functionality we’ll need:

import { gsap } from "gsap";
import { ScrollTrigger } from "gsap/ScrollTrigger.js";
import { mapState } from "vuex";

gsap.registerPlugin(ScrollTrigger);

export default {
  computed: {
    ...mapState([
      "toggleConfig",
      "startConfig",
      "isAnimationDisabled",
      "viewportSize",
    ]),
  },
  ...
  methods: {
    millionAnim() {
      let vm = this;
      let tl;
      const isScrollElConfig = {
        scrollTrigger: {
          trigger: `.million${vm.num}`,
          toggleActions: this.toggleConfig,
          start: this.startConfig,
        },
        defaults: {
          duration: 1.5,
          ease: "sine",
        },
      };
    }
  },
  mounted() {
    this.millionAnim();
  },
};

First, we’re importing gsap and the package we need, as well as state from the Vuex store. I put the toggleActions and start config settings in the store and passed them into each component because, while I was working, I needed to experiment with which point in the UI I wanted to trigger the animations; this kept me from having to configure each component separately.

Those configurations in the store look like this:

export default new Vuex.Store({
  state: {
    toggleConfig: `play pause none pause`,
    startConfig: `center 90%`,
  }
})

This configuration breaks down to

  • toggleConfig: play the animation when it passes down the page (another option is restart, which will retrigger it if you see it again), pause it when it is out of the viewport (this can slightly help with perf), and don’t retrigger it in reverse when going back up the page.
  • startConfig is stating that when the center of the element is 90% down from the height of the viewport, to trigger the animation to begin.

These are the settings we decided on for this project, but there are many others! You can understand all of the options with this video.

For this particular animation, we needed to treat it a little differently depending on whether it was the banner animation, which didn’t need to be triggered on scroll, or an animation later in the timeline. We passed in a prop and used it to apply the appropriate config depending on the number in props:

if (vm.num === 1) {
  tl = gsap.timeline({
    defaults: {
      duration: 1.5,
      ease: "sine",
    },
  });
} else {
  tl = gsap.timeline(isScrollElConfig);
}

Then, for the animation itself, I’m using what’s called a label on the timeline; you can think of it like identifying a point in time on the playhead that you may want to hang animations or functionality off of. We have to make sure we use the number prop for the label too, so we keep the timelines for the header and footer component separated.

tl.add(`million${vm.num}`)
  ...
  .from(
    "#front-leg-r",
    {
      duration: 0.5,
      rotation: 10,
      transformOrigin: "50% 0%",
      repeat: 6,
      yoyo: true,
      ease: "sine.inOut",
    },
    `million${vm.num}`
  )
  .from(
    "#front-leg-l",
    {
      duration: 0.5,
      rotation: 10,
      transformOrigin: "50% 0%",
      repeat: 6,
      yoyo: true,
      ease: "sine.inOut",
    },
    `million${vm.num}+=0.25`
  );

There’s a lot going on in the million devs animation, so I’ll just isolate one piece of movement to break down: above, we have the girl’s swinging legs. Both legs are swinging separately, both are repeating several times, and that yoyo: true lets GSAP know that I’d like the animation to reverse on every other repetition. We’re rotating the legs, but what makes it realistic is that the transformOrigin starts at the center top of the leg, so that when it’s rotating, it’s rotating around the knee axis, like knees do :)

Adding an Animation Toggle

We wanted to give users the ability to explore the site without animation, should they have a vestibular disorder, so we created a toggle for the animation play state. The toggle is nothing special; it updates state in the Vuex store through a mutation, as you might expect:

export default new Vuex.Store({
  state: {
    ...
    isAnimationDisabled: false,
  },
  mutations: {
    updateAnimationState(state) {
      state.isAnimationDisabled = !state.isAnimationDisabled
    },
    ...
})

The real updates happen in the topmost App component, where we collect all of the animations and triggers and then adjust them based on the state in the store. We watch the isAnimationDisabled property for changes, and when one occurs, we grab all instances of ScrollTrigger animations in the app. We don’t .kill() the animations, which is one option, because if we did, we wouldn’t be able to restart them.

Instead, we either set their progress to the final frame if animations are disabled, or if we’re restarting them, we set their progress to 0 so they can restart when they are set to fire on the page. If we had used .restart() here, all of the animations would have played and we wouldn’t see them trigger as we kept going down the page. Best of both worlds!

watch: {
  isAnimationDisabled(newVal, oldVal) {
    ScrollTrigger.getAll().forEach((trigger) => {
      let animation = trigger.animation;
      if (newVal === true) {
        animation && animation.progress(1);
      } else {
        animation && animation.progress(0);
      }
    });
  },
},

SVG Accessibility

I am by no means an accessibility expert, so please let me know if I’ve misstepped here, but I did a fair amount of research and testing on this site, and was pretty excited that when I tested on my MacBook via VoiceOver, the site’s pertinent information was traversable, so I’m sharing what we did to get there.

For the initial SVG that encased everything, we didn’t apply a role, so that the screenreader would traverse within it. For the trees and bushes, we applied role="img" so the screenreader would skip them. For the more detailed stations, we applied a unique id and a title, which was the first element within the SVG. We also applied role="presentation".

<svg
  ...
  role="presentation"
  aria-labelledby="analyticsuklaunch"
>
  <title id="analyticsuklaunch">Launch of analytics</title>

I learned a lot of this from this article by Heather Migliorisi, and this great article by Leonie Watson.

The text within the SVG does announce itself as you tab through the page, the link is found, and all of the text is read. This is what that text component looks like, with those slots mentioned above.

<template>
  <a
    :href="`https://www.netlify.com/blog/2020/08/03/netlify-milestones-on-the-road-to-1-million-devs/#${urlSlug}`"
  >
    <svg
      xmlns="http://www.w3.org/2000/svg"
      width="450"
      height="250"
      :x="svgCoords.x"
      :y="svgCoords.y"
      viewBox="0 0 280 115.4"
    >
      <g :class="`textnode text${num}`">
        <text class="d" transform="translate(7.6 14)">
          <slot name="date">Jul 13, 2016</slot>
        </text>
        <text class="e" transform="translate(16.5 48.7)">
          <slot name="event">Something here</slot>
        </text>
        <text class="e" transform="translate(16.5 70)">
          <slot name="event2" />
        </text>
        <text class="h" transform="translate(164.5 104.3)">View Milestone</text>
      </g>
    </svg>
  </a>
</template>

Here’s a video of what this sounds like if I tab through the SVG on my Mac:

If you have further suggestions for improvement please let us know!

The repo is also open source if you want to check out the code or file a PR.

Thanks a million (pun intended) to my coworkers Zach Leatherman and Hugues Tennier who worked on this with me; their input and work were invaluable to the project, and it only exists thanks to the teamwork that got it over the line! And so much respect to Alejandro Alvarez, who did the design and did a spectacular job. High fives all around. 🙌

The post The Making of: Netlify’s Million Devs SVG Animation Site appeared first on CSS-Tricks.


A Lightweight Masonry Solution

Css Tricks - Mon, 08/03/2020 - 4:00am

Back in May, I learned about Firefox adding masonry to CSS grid. Masonry layouts are something I’ve been wanting to do on my own from scratch for a very long time, but have never known where to start. So, naturally, I checked the demo and then I had a lightbulb moment when I understood how this new proposed CSS feature works.

Support is obviously limited to Firefox for now (and, even there, only behind a flag), but it still offered me enough of a starting point for a JavaScript implementation that would cover browsers that currently lack support.

The way Firefox implements masonry in CSS is by setting either grid-template-rows (as in the example) or grid-template-columns to a value of masonry.

My approach was to use this for supporting browsers (which, again, means just Firefox for now) and create a JavaScript fallback for the rest. Let’s look at how this works using the particular case of an image grid.

First, enable the flag

In order to do this, we go to about:config in Firefox and search for “masonry.” This brings up the layout.css.grid-template-masonry-value.enabled flag, which we enable by double clicking its value from false (the default) to true.

Making sure we can test this feature.

Let’s start with some markup

The HTML structure looks something like this:

<section class="grid--masonry">
  <img src="black_cat.jpg" alt="black cat" />
  <!-- more such images following -->
</section>

Now, let’s apply some styles

The first thing we do is make the top-level element a CSS grid container. Next, we define a maximum width for our images, let’s say 10em. We also want these images to shrink to whatever space is available for the grid’s content-box if the viewport becomes too narrow to accommodate for a single 10em column grid, so the value we actually set is Min(10em, 100%). Since responsivity is important these days, we don’t bother with a fixed number of columns, but instead auto-fit as many columns of this width as we can:

$w: Min(10em, 100%);

.grid--masonry {
  display: grid;
  grid-template-columns: repeat(auto-fit, $w);

  > * { width: $w; }
}

Note that we’ve used Min() and not min() in order to avoid a Sass conflict.


Well, that’s a grid!

Not a very pretty one though, so let’s force its content to be in the middle horizontally, then add a grid-gap and padding that are both equal to a spacing value ($s). We also set a background to make it easier on the eyes.

$s: .5em;

/* masonry grid styles */
.grid--masonry {
  /* same styles as before */
  justify-content: center;
  grid-gap: $s;
  padding: $s
}

/* prettifying styles */
html { background: #555 }

Having prettified the grid a bit, we turn to doing the same for the grid items, which are the images. Let’s apply a filter so they all look a bit more uniform, while giving a little additional flair with slightly rounded corners and a box-shadow.

img {
  border-radius: 4px;
  box-shadow: 2px 2px 5px rgba(#000, .7);
  filter: sepia(1);
}

The only thing we need to do now for browsers that support masonry is to declare it:

.grid--masonry {
  /* same styles as before */
  grid-template-rows: masonry;
}

While this won’t work in most browsers, it produces the desired result in Firefox with the flag enabled as explained earlier.

grid-template-rows: masonry working in Firefox with the flag enabled (Demo).

But what about the other browsers? That’s where we need a…

JavaScript fallback

In order to be economical with the JavaScript the browser has to run, we first check if there are any .grid--masonry elements on that page and whether the browser has understood and applied the masonry value for grid-template-rows. Note that this is a generic approach that assumes we may have multiple such grids on a page.

let grids = [...document.querySelectorAll('.grid--masonry')];

if(grids.length && getComputedStyle(grids[0]).gridTemplateRows !== 'masonry') {
  console.log('boo, masonry not supported 😭')
}
else console.log('yay, do nothing!')

Support test (live).

If the new masonry feature is not supported, we then get the row-gap and the grid items for every masonry grid, then set a number of columns (which is initially 0 for each grid).

let grids = [...document.querySelectorAll('.grid--masonry')];

if(grids.length && getComputedStyle(grids[0]).gridTemplateRows !== 'masonry') {
  grids = grids.map(grid => ({
    _el: grid,
    gap: parseFloat(getComputedStyle(grid).gridRowGap),
    items: [...grid.childNodes].filter(c => c.nodeType === 1),
    ncol: 0
  }));

  grids.forEach(grid => console.log(`grid items: ${grid.items.length}; grid gap: ${grid.gap}px`))
}

Note that we need to make sure the child nodes are element nodes (which means they have a nodeType of 1). Otherwise, we can end up with text nodes consisting of carriage returns in the array of items.

Checking we got the correct number of items and gap (live).

Before proceeding further, we have to ensure the page has loaded and the elements aren’t still moving around. Once we’ve handled that, we take each grid and read its current number of columns. If this is different from the value we already have, then we update the old value and rearrange the grid items.

if(grids.length && getComputedStyle(grids[0]).gridTemplateRows !== 'masonry') {
  grids = grids.map(/* same as before */);

  function layout() {
    grids.forEach(grid => {
      /* get the post-resize/ load number of columns */
      let ncol = getComputedStyle(grid._el).gridTemplateColumns.split(' ').length;

      if(grid.ncol !== ncol) {
        grid.ncol = ncol;
        console.log('rearrange grid items')
      }
    });
  }

  addEventListener('load', e => {
    layout(); /* initial load */
    addEventListener('resize', layout, false)
  }, false);
}

Note that calling the layout() function is something we need to do both on the initial load and on resize.

When we need to rearrange grid items (live).

To rearrange the grid items, the first step is to remove the top margin on all of them (this may have been set to a non-zero value to achieve the masonry effect before the current resize).

If the viewport is narrow enough that we only have one column, we’re done!

Otherwise, we skip the first ncol items and we loop through the rest. For each item considered, we compute the position of the bottom edge of the item above and the current position of its top edge. This allows us to compute how much we need to move it vertically such that its top edge is one grid gap below the bottom edge of the item above.

/* if the number of columns has changed */
if(grid.ncol !== ncol) {
  /* update number of columns */
  grid.ncol = ncol;

  /* revert to initial positioning, no margin */
  grid.items.forEach(c => c.style.removeProperty('margin-top'));

  /* if we have more than one column */
  if(grid.ncol > 1) {
    grid.items.slice(ncol).forEach((c, i) => {
      let prev_fin = grid.items[i].getBoundingClientRect().bottom /* bottom edge of item above */,
          curr_ini = c.getBoundingClientRect().top /* top edge of current item */;

      c.style.marginTop = `${prev_fin + grid.gap - curr_ini}px`
    })
  }
}

We now have a working, cross-browser solution!

A couple of minor improvements

A more realistic structure

In a real-world scenario, we’re more likely to have each image wrapped in a link to its full-size version so that the big image opens in a lightbox (or we navigate to it as a fallback).

<section class='grid--masonry'>
  <a href='black_cat_large.jpg'>
    <img src='black_cat_small.jpg' alt='black cat'/>
  </a>
  <!-- and so on, more thumbnails following the first -->
</section>

This means we also need to alter the CSS a bit. While we don’t need to explicitly set a width on the grid items anymore — as they’re now links — we do need to set align-self: start on them because, unlike images, they stretch to cover the entire row height by default, which will throw off our algorithm.

.grid--masonry > * { align-self: start; }

img {
  display: block; /* avoid weird extra space at the bottom */
  width: 100%;
  /* same styles as before */
}

Making the first element stretch across the grid

We can also make the first item stretch horizontally across the entire grid (which means we should probably also limit its height and make sure the image doesn’t overflow or get distorted):

.grid--masonry > :first-child {
  grid-column: 1 / -1;
  max-height: 29vh;
}

img {
  max-height: inherit;
  object-fit: cover;
  /* same styles as before */
}

We also need to exclude this stretched item by adding another filter criterion when we get the list of grid items:

grids = grids.map(grid => ({
  _el: grid,
  gap: parseFloat(getComputedStyle(grid).gridRowGap),
  items: [...grid.childNodes].filter(c =>
    c.nodeType === 1 &&
    +getComputedStyle(c).gridColumnEnd !== -1
  ),
  ncol: 0
}));

Handling grid items with variable aspect ratios

Let’s say we want to use this solution for something like a blog. We keep the exact same JS and almost the exact same masonry-specific CSS – we only change the maximum width a column may have and drop the max-height restriction for the first item.
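In sketch form, the first-item change amounts to something like the rule below; the column width tweak itself happens in the grid-template-columns value, whose exact numbers come from the demo rather than from the prose here:

.grid--masonry > :first-child {
  grid-column: 1 / -1; /* still spans all the columns */
  /* the max-height: 29vh cap from the image grid is simply gone */
}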

As can be seen from the demo below, our solution also works perfectly in this case, where we have a grid of blog posts:

(CodePen demo)

You can also resize the viewport to see how it behaves in this case.

However, if we want the width of the columns to be somewhat flexible, for example, something like this:

$w: minmax(Min(20em, 100%), 1fr)
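For context, this Sass variable presumably ends up as the repeating column track, along these lines (the repeat(auto-fit, ...) usage is an assumption based on how such tracks are typically defined, not something spelled out here):

$w: minmax(Min(20em, 100%), 1fr);

.grid--masonry {
  display: grid;
  grid-template-columns: repeat(auto-fit, $w); /* assumed usage */
}

The capital M in Min() is the usual trick for stopping Sass from trying to evaluate min() itself, so the function gets passed through untouched to the CSS output.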

Then we have a problem on resize:

The changing width of the grid items, combined with the fact that each item has different text content, means that when a certain threshold is crossed, one grid item may gain or lose a line of text (and therefore change its height) while the others don’t. And if the number of columns doesn’t change, the vertical offsets never get recomputed, so we end up with either overlaps or bigger gaps.

In order to fix this, we need to recompute the offsets whenever at least one item’s height changes for the current grid. That means we also need a way to test whether any item in the current grid has changed its height. And then we need to reset this value at the end of the if block so that we don’t rearrange the items needlessly next time around.

if(grid.ncol !== ncol || grid.mod) {
  /* same as before */
  grid.mod = 0
}

Alright, but how do we change this grid.mod value? My first idea was to use a ResizeObserver:

if(grids.length && getComputedStyle(grids[0]).gridTemplateRows !== 'masonry') {
  let o = new ResizeObserver(entries => {
    entries.forEach(entry => {
      grids.find(grid => grid._el === entry.target.parentElement).mod = 1
    })
  });

  /* same as before */

  addEventListener('load', e => {
    /* same as before */
    grids.forEach(grid => {
      grid.items.forEach(c => o.observe(c))
    })
  }, false)
}

This does the job of rearranging the grid items when necessary, even if the number of grid columns doesn’t change. But it also makes having that if condition pointless!

This is because it sets grid.mod to 1 whenever the height or the width of at least one item changes. An item’s height only changes due to text reflow, which is caused by its width changing; but the width changes on every viewport resize, whether or not a height change follows.

This is why I eventually decided on storing the previous item heights and checking whether they have changed on resize to determine whether grid.mod remains 0 or not:

function layout() {
  grids.forEach(grid => {
    grid.items.forEach(c => {
      let new_h = c.getBoundingClientRect().height;

      if(new_h !== +c.dataset.h) {
        c.dataset.h = new_h;
        grid.mod++
      }
    });

    /* same as before */
  })
}

That’s it! We now have a nice lightweight solution. The minified JavaScript is under 800 bytes, while the strictly masonry-related styles are under 300 bytes.

But, but, but… What about browser support?

Well, @supports just so happens to have better browser support than any of the newer CSS features used here, so we can put the nice stuff inside it and have a basic, non-masonry grid for non-supporting browsers. This version works all the way back to IE9.
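In sketch form, the gating looks something like this; the fallback styles are placeholders, not the article’s exact code:

/* basic styles every browser gets, even IE9 (placeholder) */
.grid--masonry > * {
  /* e.g. simple, same-size item styling for old browsers */
}

/* browsers that don't understand @supports (like IE) skip this block entirely */
@supports (display: grid) {
  .grid--masonry {
    display: grid;
    /* ...all the masonry-related styles from above */
  }
}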

The result in Internet Explorer

It may not look the same, but it looks decent and it’s perfectly functional. Supporting a browser doesn’t mean replicating all the visual candy for it. It means the page works and doesn’t look broken or horrible.

What about the no JavaScript case?

Well, we can apply the fancy styles only if the root element has a js class which we add via JavaScript! Otherwise, we get a basic grid where all the items have the same size.

The no JavaScript result (Demo).
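For completeness, a minimal sketch of that hook; the js class name comes from the article, while scoping the fancy styles under it (e.g. .js .grid--masonry) is the usual pattern rather than code quoted from the post:

/* run as early as possible, ideally from an inline script in <head>;
   the CSS can then gate the fancy styles behind the .js class,
   e.g. .js .grid--masonry { ... } */
document.documentElement.classList.add('js');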

The post A Lightweight Masonry Solution appeared first on CSS-Tricks.


10 modern layouts in 1 line of CSS

Css Tricks - Fri, 07/31/2020 - 8:09am

Una doing an amazing job of showing just how (dare I say it?) easy CSS layout has gotten. There is plenty to learn, but what you learn makes sense, and once you have, it’s quite empowering.

The demos are all together here.


The post 10 modern layouts in 1 line of CSS appeared first on CSS-Tricks.

