Monday, July 4th, 2022

Checked in at Clery’s. Pipers galore! — with Jessica

Sunday, March 6th, 2022

Both plagues on your one house

February is a tough month at the best of times. A February during The Situation is particularly grim.

At least in December you get Christmas, whose vibes can even carry you through most of January. But by the time February rolls around, it’s all grim winteriness with no respite in sight.

In the middle of February, Jessica caught the ’rona. On the bright side, this wasn’t the worst timing: if this had happened in December, our Christmas travel plans to visit family would’ve been ruined. On the not-so-bright side, catching a novel coronavirus is no fun.

Still, the vaccines did their job. Jessica felt pretty crap for a couple of days but was on the road to recovery before too long.

Amazingly, I did not catch the ’rona. We slept in separate rooms, but still, we were spending most of our days together in the same small flat. Given the virulence of The Omicron Variant, I’m counting my blessings.

But just in case I got any ideas about having some kind of superhuman immune system, right after Jessica had COVID-19, I proceeded to get gastroenteritis. I’ll spare you the details, but let me just say it was not pretty.

Amazingly, Jessica did not catch it. I guess two years of practising intense hand-washing pays off when a stomach bug comes a-calling.

So all in all, not a great February, even by February’s already low standards.

The one bright spot that I get to enjoy every February is my birthday, just as the month is finishing up. Last year I spent my birthday—the big five oh—in lockdown. But two years ago, right before the world shut down, I had a lovely birthday weekend in Galway. This year, as The Situation began to unwind and de-escalate, I thought it would be good to reprise that birthday trip.

We went to Galway. We ate wonderful food at Aniar. We listened to some great trad music. We drank some pints. It was good.

But it was hard to enjoy the trip knowing what was happening elsewhere in Europe. I’d blame February for being a bastard again, but in this case the bastard is clearly Vladimir Putin. Fucker.

Just as it’s hard to switch off for a birthday break, it’s equally challenging to go back to work and continue as usual. It feels very strange to be spending the days working on stuff that clearly, in the grand scheme of things, is utterly trivial.

I take some consolation in the fact that everyone else feels this way too, and everyone is united in solidarity with Ukraine. (There are some people in my social media timelines who also feel the need to point out that other countries have been invaded and bombed too. I know it’s not their intention but there’s a strong “all lives matter” vibe to that kind of whataboutism. Hush.)

Anyway. February’s gone. It’s March. Things still feel very grim indeed. But perhaps, just perhaps, there’s a hint of Spring in the air. Winter will not last forever.

Tuesday, January 4th, 2022

Plus Equals #4, December 2021

In which Rob takes a deep dive into isometric projection and then gets generative with it.

Wednesday, October 27th, 2021

Checked in at Jolly Brewer. It’s been quite a while since we last had a session here! — with Jessica

Friday, October 22nd, 2021

Checked in at Clayton Crown Hotel. Return To London Town 🎶🎻 — with Jessica

Thursday, September 23rd, 2021

Checked in at Sadler’s Wells. Opening night of Akram Khan’s Creature! — with Jessica

Tuesday, September 14th, 2021

Accessibility testing

I was doing some accessibility work with a client a little while back. It was mostly giving their site the once-over, highlighting any issues that we could then discuss. It was an audit of sorts.

While I was doing this I started to realise that not all accessibility issues are created equal. I don’t just mean in their severity. I mean that some issues can—and should—be caught early on, while other issues can only be found later.

Take colour contrast. This is something that should be checked before a line of code is written. When designs are being sketched out and then refined in a graphical editor like Figma, that’s the time to check the ratio between background and foreground colours to make sure there’s enough contrast between them. You can catch this kind of thing later on, but by then it’s likely to come with a higher cost—you might have to literally go back to the drawing board. It’s better to find the issue when you’re at the drawing board the first time.
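
For reference, the ratio in question is the contrast ratio defined by WCAG. Design tools and browser devtools will usually calculate it for you, but as a rough sketch of the underlying maths, here it is in JavaScript (the colour values are just examples):

// Relative luminance of an sRGB colour given as [r, g, b] values from 0–255.
function relativeLuminance([r, g, b]) {
  const channel = (value) => {
    const c = value / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio between two colours: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(colourOne, colourTwo) {
  const l1 = relativeLuminance(colourOne);
  const l2 = relativeLuminance(colourTwo);
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

contrastRatio([118, 118, 118], [255, 255, 255]); // ≈4.5, the minimum for body text under WCAG AA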

Then there’s the HTML. Most accessibility issues here can be caught before the site goes live. Usually they’re issues of omission: form fields that don’t have an explicitly associated label element (using the for and id attributes); images that don’t have alt text; pages that don’t have sensible heading levels or landmark regions like main and nav. None of these are particularly onerous to fix, and fixing them gives you the biggest bang for your buck. If you’ve got sensible forms, sensible headings, alt text on images, and a solid document structure, you’ve already covered the vast majority of accessibility issues with very little overhead. Some of these checks can also be automated: alt text for images; labels for inputs.
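
Those last two checks could be as simple as something like this, run in the browser console. It’s a rough sketch, not a substitute for a proper audit tool:

// Flag images with no alt attribute at all.
document.querySelectorAll('img:not([alt])').forEach((img) => {
  console.warn('Image missing alt text:', img.src);
});

// Flag form fields with no associated label and no ARIA labelling.
document.querySelectorAll('input, textarea, select').forEach((field) => {
  const hasLabel = field.labels && field.labels.length > 0;
  const hasAria = field.hasAttribute('aria-label') || field.hasAttribute('aria-labelledby');
  if (!hasLabel && !hasAria && field.type !== 'hidden') {
    console.warn('Form field missing a label:', field);
  }
});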

Then there’s interactive stuff. If you only use native HTML elements you’re probably in the clear, but chances are you’ve got some bespoke interactivity on your site: a carousel; a mega dropdown for navigation; a tabbed interface. HTML doesn’t give you any of those out of the box so you’d need to make your own using a combination of HTML, CSS, JavaScript and ARIA. There’s plenty of testing you can do before launching—I always ask myself “What would Heydon do?”—but these components really benefit from being tested by real screen reader users.
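
To give a flavour of what that bespoke work involves, here’s a minimal sketch of the disclosure pattern that sits behind something like a dropdown navigation (the names and links here are placeholders). A real component would also need keyboard handling and focus management, which is exactly why testing with real screen reader users matters.

<button type="button" aria-expanded="false" aria-controls="site-nav">Menu</button>
<ul id="site-nav" hidden>
  <li><a href="/">Home</a></li>
  <li><a href="/about">About</a></li>
</ul>

<script>
  // Toggle the menu and keep the ARIA state in sync.
  const toggle = document.querySelector('[aria-controls="site-nav"]');
  const nav = document.getElementById('site-nav');
  toggle.addEventListener('click', () => {
    const expanded = toggle.getAttribute('aria-expanded') === 'true';
    toggle.setAttribute('aria-expanded', String(!expanded));
    nav.hidden = expanded;
  });
</script>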

So if you commission an accessibility audit, you should hope to get feedback that’s mostly in that third category—interactive widgets.

If you get feedback on document structure and other semantic issues with the HTML, you should fix those issues, sure, but you should also see what you can do to stop those issues going live again in the future. Perhaps you can add some steps in the build process. Or maybe it’s more about making sure the devs are aware of these low-hanging fruit. Or perhaps there’s a framework or content management system that’s stopping you from improving your HTML. Then you need to execute a plan for ditching that software.

If you get feedback about colour contrast issues, just fixing the immediate problem isn’t going to address the underlying issue. There’s a process problem, or perhaps a communication issue. In that case, don’t look for a technical solution. A design system, for example, will not magically fix a workflow issue or route around the problem of designers and developers not talking to each other.

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

Tuesday, August 3rd, 2021

Facebook Container for Firefox

Firefox has a nifty extension—made by Mozilla—called Facebook Container. It does two things.

First of all, it sandboxes any of your activity while you’re on the facebook.com domain. The tab you’re in is isolated from all others.

Secondly, when you visit a site that loads a tracker from Facebook, the extension alerts you to its presence. For example, if a page has a share widget that would post to Facebook, a little fence icon appears over the widget warning you that Facebook will be able to track that activity.

It’s a nifty extension that I’ve been using for quite a while. Except now it’s gone completely haywire. That little fence icon is appearing all over the web wherever there’s a form with an email input. See, for example, the newsletter sign-up form in the footer of the Clearleft site. It’s happening on forms over on The Session too despite the rigorous-bordering-on-paranoid security restrictions in place there.

Hovering over the fence icon displays this text:

If you use your real email address here, Facebook may be able to track you.

That is, of course, false. It’s also really damaging. One of the worst things that you can do in the security space is to cry wolf. If a concerned user is told that they can ignore that warning, you’re lessening the impact of all warnings, even serious legitimate ones.

Sometimes false positives are an acceptable price to pay for overall increased security, but in this case, the rate of false positives can only decrease trust.

I tried to find out how to submit a bug report about this but I couldn’t work it out (and I certainly don’t want to file a bug report in a review) so I’m writing this in the hopes that somebody at Mozilla sees it.

What’s really worrying is that this might not be considered a bug. The release notes for the version of the extension that came out last week say:

Email fields will now show a prompt, alerting users about how Facebook can track users by their email address.

Like …all email fields? That’s ridiculous!

I thought the issue might’ve been fixed in the latest release that came out yesterday. The release notes say:

This release addresses fixes a issue from our last release – the email field prompt now only displays on sites where Facebook resources have been blocked.

But the behaviour is unfortunately still there, even on sites like The Session or Clearleft that wouldn’t touch Facebook resources with a barge pole. The fence icon continues to pop up all over the web.

I hope this gets sorted soon. I like the Facebook Container extension and I’d like to be able to recommend it to other people. Right now I’d recommend the opposite—don’t install this extension while it’s behaving so overzealously. If the current behaviour continues, I’ll be uninstalling this extension myself.

Update: It looks like a fix is being rolled out. Fingers crossed!

Sunday, May 30th, 2021

Checked in at Fox On the Downs. A pint of plain in the sun — with Jessica

Wednesday, March 17th, 2021

Remote work on the Clearleft podcast

The sixth episode of season two of the Clearleft podcast is available now. The last episode of the season!

The topic is remote work. The timing is kind of perfect. It was exactly one year ago today that Clearleft went fully remote. Having a podcast episode to mark the anniversary seems fitting.

I didn’t interview anyone specifically for this episode. Instead, whenever I was chatting to someone about some other topic—design systems, prototyping, or whatever—I’d wrap up by asking them to describe their surroundings and how they were adjusting to life at home. After two seasons’ worth of interviews, I had a decent library of responses. So this episode includes voices you last heard from back in season one: Paul, Charlotte, Amy, and Aarron.

Then the episode shifts. I’ve got excerpts from a panel discussion we held a while back on the future of work. These panel discussions used to happen up in London, but this one was, obviously, online. It’s got a terrific line-up: Jean, Holly, Emma, and Lola, all dialing in from different countries and all sharing their stories openly and honestly. (Fun fact: I first met Lola three years ago at the Pixel Up conference in South Africa and on this day in 2018 we were out on safari together.)

I’m happy with how this episode turned out. It’s a fitting finish to the season. It’s just seventeen and a half minutes long so take a little time out of your day to have a listen.

As always, if you like what you hear, please spread the word.

Tuesday, March 9th, 2021

Content buddy

One of my roles at Clearleft is “content buddy.” If anyone is writing a talk, or a blog post, or a proposal and they want an extra pair of eyes on it, I’m there to help.

Sometimes a colleague will send a link to a Google Doc where they’ve written an article. I can then go through it and suggest changes. Using the “suggest” mode rather than the “edit” mode in Google Docs means that they can accept or reject each suggestion later.

But what works better—and is far more fun—is if we arrange to have a video call while we both have the Google Doc open in our browsers. That way, instead of just getting the suggestions, we can talk through the reasoning behind each one. It feels more like teaching them to fish instead of giving them a grammatically correct fish.

Some of the suggestions are very minor: punctuation, capitalisation, stuff like that. Where it gets really interesting is trying to figure out and explain why some sentence constructions feel better than others.

A fairly straightforward example is long sentences. Not all long sentences are bad, but the longer a sentence gets, the more it runs the risk of overwhelming the reader. So if there’s an opportunity to split one long sentence into two shorter sentences, I’ll usually recommend that.

Here’s an example from Chris’s post, Delivering training remotely – the same yet different. The original sentence read:

I recently had the privilege of running some training sessions on product design and research techniques with the design team at Duck Duck Go.

There’s nothing wrong with that. But maybe this is a little easier to digest:

I recently had the privilege of running some training sessions with the design team at Duck Duck Go. We covered product design and research techniques.

Perhaps this is kind of like the single responsibility principle in programming. Whereas the initial version was one sentence that conveyed two pieces of information (who the training was with and what the training covered), the final version has a separate sentence for each piece of information.

I wouldn’t take that idea too far though. Otherwise you’d end up with something quite stilted and robotic.

Speaking of sounding robotic, I’ve noticed that people sometimes avoid using contractions when they’re writing online: “there is” instead of “there’s” or “I am” instead of “I’m.” Avoiding contractions seems to be more professional, but actually it makes the writing a bit too formal. There’s a danger of sounding like a legal contract. Or a Vulcan.

Sometimes a long sentence can’t be broken down into shorter sentences. In that case, I watch out for how much cognitive load the sentence is doling out to the reader.

Here’s an example from Maite’s post, How to engage the right people when recruiting in house for research. One sentence initially read:

The relevance of the people you invite to participate in a study and the information they provide have a great impact on the quality of the insights that you get.

The verb comes quite late there. As a reader, until I get to “have a great impact”, I have to keep track of everything up to that point. Here’s a rephrased version:

The quality of the insights that you get depends on the relevance of the people you invite to participate in a study and the information they provide.

Okay, there are two changes there. First of all, the verb is now “depends on” instead of “have a great impact on.” I think that’s a bit clearer. Secondly, the verb comes sooner. Now I only have to keep track of the words up until “depends on”. After that, I can flush my memory buffer.

Here’s another changed sentence from the same article. The initial sentence read:

You will have to communicate at different times and for different reasons with your research participants.

I suggested changing that to:

You will have to communicate with your research participants at different times and for different reasons.

To be honest, I find it hard to explain why that second version flows better. I think it’s related to the idea of reducing dependencies. The object “your research participants” is dependent on the verb “to communicate with.” So it makes more sense to keep them together instead of putting a subclause between them. The subclause can go afterwards instead: “at different times and for different reasons.”

Here’s one final example from Katie’s post, Service Designers don’t design services, we all do. One sentence initially read:

Understanding the relationships between these moments, digital and non-digital, and designing across and between these moments is key to creating a compelling user experience.

That sentence could be broken into shorter sentences, but it might lose some impact. Still, it can be rephrased so the reader doesn’t have to do as much work. As it stands, until the reader gets to “is key to creating”, they have to keep track of everything before that. It’s like the feeling of copying and pasting. If you copy something to the clipboard, you want to paste it as soon as possible. The longer you have to hold onto it, the more uncomfortable it feels.

So here’s the reworked version:

The key to creating a compelling user experience is understanding the relationships between these moments, digital and non-digital, and designing across and between these moments.

As a reader, I can digest and discard each of these pieces in turn:

  1. The key to creating a compelling user experience is…
  2. understanding the relationships between these moments…
  3. digital and non-digital…
  4. and…
  5. designing across and between these moments.

Maybe I should’ve suggested “between these digital and non-digital moments” instead of “between these moments, digital and non-digital”. But then I worry that I’m intruding on the author’s style too much. With the finished sentence, it still feels like a rousing rallying cry in Katie’s voice, but slightly adjusted to flow a little easier.

I must say, I really, really enjoy being a content buddy. I know the word “editor” would be the usual descriptor, but I like how unintimidating “content buddy” sounds.

I am almost certainly a terrible content buddy to myself. Just as I ignore my own advice about preparing conference talks, I’m sure I go against my own editorial advice every time I blurt out a blog post here. But there’s one piece I’ve given to others that I try to stick to: write like you speak.

Friday, February 19th, 2021

Design engineer

It’s been just over two years since Chris wrote his magnum opus about The Great Divide. It really resonated with me, and a lot of other people.

The crux of it is that the phrase “front-end development” has become so broad and applies to so many things, that it has effectively lost its usefulness:

Two front-end developers are sitting at a bar. They have nothing to talk about.

Brad nailed the differences in responsibilities when he described them as front-of-the-front-end and back-of-the-front-end web development:

A front-of-the-front-end developer is a web developer who specializes in writing HTML, CSS, and presentational JavaScript code.

A back-of-the-front-end developer is a web developer who specializes in writing JavaScript code necessary to make a web application function properly.

In my experience, the term “full stack developer” is often self-applied by back-of-the-front-end developers who perhaps underestimate the complexity of front-of-the-front-end development.

Me, I’m very much a front-of-the-front developer. And the dev work we do at Clearleft very much falls into that realm.

This division of roles and responsibilities reminds me of a decision we made in the founding days of Clearleft. Would we attempt to be a full-service agency, delivering everything from design to launch? Or would we specialise? We decided to specialise, doubling down on UX design, which was at the time an under-served area. But we still decided to do front-end development. We felt that working with the materials of the web would allow us to deliver better UX.

We made a conscious decision not to do back-end development. Partly it was a question of scale. If you were a back-end shop, you probably had to double down on one stack: PHP or Ruby or Python. We didn’t want to have to turn away any clients based on their tech stack. Of course this meant that we had to partner with other agencies that specialised in those stacks, and that’s what we did—we had trusted partners for Drupal development, Rails development, WordPress development, and so on.

The world of front-end development didn’t have that kind of fragmentation. The only real split at the time was between Flash agencies and web standards (HTML, CSS, and JavaScript).

Overall, our decision to avoid back-end development stood us in good stead. There were plenty of challenges though. We had to learn how to avoid “throwing stuff over the wall” at whoever would be doing the final back-end implementation. I think that’s why we latched on to design systems so early. It was clearly a better deliverable for the people building the final site—much better than mock-ups or pages.

Avoiding back-end development meant we also avoided long-term lock-in with maintenance, security, hosting, and so on. It might sound strange for an agency to actively avoid long-term revenue streams, but at Clearleft it’s always been our philosophy to make ourselves redundant. We want to give our clients everything they need—both in terms of deliverables and knowledge—so that they aren’t dependent on us.

That all worked great as long as there was a clear distinction between front-end development and back-end development. Front-end development was anything that happened in a browser. Back-end development was anything that happened on the server.

But as the waters muddied and complex business logic migrated from the server to the client, our offering became harder to clarify. We’d tell clients that we did front-end development (meaning we’d supply them with components formed of HTML, CSS, and JavaScript) and they might expect us to write application logic in React.

That’s why Brad’s framing resonated with me. Clearleft does front-of-front-end development, but we liaise with our clients’ back-of-the-front-end developers. In fact, that bridging work—between design and implementation—is where devs at Clearleft shine.

As much as I can relate to the term front-of-front-end, it doesn’t exactly roll off the tongue. I don’t expect it to be anyone’s job title anytime soon.

That’s why I was so excited by the term “design engineer,” which I think I first heard from Natalya Shelburne. There’s even a book about it and the job description sounds very much like the front-of-the-front-end work but with a heavy emphasis on the collaboration and translation between design and implementation. As Trys puts it:

What I love about the name “Design Engineer”, is that it’s entirely focused on the handshake between those two other roles.

There’s no mention of UI, CSS, front-end, design systems, documentation, prototyping, tooling or any ‘hard’ skills that could be used in the role itself.

Trys has been doing some soul-searching and has come to the conclusion “I think I might be a design engineer…”. He has also written on the Clearleft blog about how well the term describes design and development at Clearleft.

Personally, I’m not a fan of using the term “engineer” to refer to anyone who isn’t actually a qualified engineer—I explain why in my talk Building—but I accept that that particular ship has sailed. And the term “design developer” just sounds odd. So I’m all in on using the term “design engineer”.

I can imagine this phrase being used in a job ad. It could also be attached to levels: a junior design engineer, a mid-level design engineer, a senior design engineer; each level having different mixes of code and collaboration (maybe a head of design engineering never writes any code).

Trys has written a whole series of posts on the nitty-gritty work involved in design engineering. I highly recommend reading all of them.

Thursday, September 24th, 2020

Performance and people

I was helping a client with a bit of a performance audit this week. I really, really enjoy this work. It’s such a nice opportunity to get my hands in the soil of a website, so to speak, and suggest changes that will have a measurable effect on the user’s experience.

Not only is web performance a user experience issue, it may well be the user experience issue. Page speed has a demonstrable, direct effect on user experience (and revenue and customer satisfaction and whatever other metrics you’re using).

It struck me that there’s a continuum of performance challenges. On one end of the continuum, you’ve got technical issues. These can be solved with technical solutions. On the other end of the continuum, you’ve got human issues. These can be solved with discussions, agreement, empathy, and conversations (often dreaded or awkward).

I think that, as developers, we tend to gravitate towards the technical issues. That’s our safe space. But I suspect that bigger gains can be reaped by tackling the uncomfortable human issues.

This week, for example, I uncovered three performance issues. One was definitely technical. One was definitely human. One was halfway between.

The technical issue was with web fonts. It’s a lot of fun to dive into this aspect of web performance because quite often there’s some low-hanging fruit: a relatively simple technical fix that will boost the performance (or perceived performance) of a website. That might be through resource hints (using link rel="preload" in the HTML) or adjusting the font loading (using font-display in the CSS) or even nerdier stuff like subsetting.

In this case, the issue was with the file format of the font itself. By switching to woff2, there were significant file size savings. And the great thing is that @font-face rules allow you to specify multiple file formats so you can still support older browsers that can’t handle woff2. A win all ’round!
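
As a rough sketch (the font name and file paths are placeholders), that pattern looks something like this, with the woff2 version listed first and font-display thrown in for good measure:

@font-face {
  font-family: "Example Sans"; /* placeholder name */
  src: url("/fonts/example.woff2") format("woff2"), /* smaller; modern browsers pick this */
       url("/fonts/example.woff") format("woff"); /* fallback for older browsers */
  font-display: swap; /* show fallback text while the web font loads */
}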

The performance issue that was right in the middle of the technical/human continuum was with images. At first glance it looked like a similar issue to the fonts. Some images were being served in the wrong formats. When I say “wrong”, I guess I mean inappropriate. A photographic image, for example, is probably going to be best served as a JPG rather than a PNG.

But unlike the fonts, the images weren’t in the direct control of the developers. These images were coming from a Content Management System. And while there’s a certain amount of processing you can do on the server, a human still makes the decision about what file format they’re uploading.

I’ve seen this happen at Clearleft. We launched an event site with lean performant code, but then someone uploaded an image that’s megabytes in size. The solution in that case wasn’t technical. We realised there was a knowledge gap around image file formats—which, let’s face it, is kind of a techy topic that most normal people shouldn’t be expected to know.

But it was extremely gratifying to see that people were genuinely interested in knowing a bit more about choosing the right format for the right image. I was able to provide a few rules of thumb and point to free software for converting images. It empowered those people to feel more confident using the Content Management System.

It was a similar situation with the client site I was looking at this week. Nobody is uploading oversized images in order to deliberately make the site slower. They probably don’t realise the difference that image formats can make. By having a discussion and giving them some pointers, they’ll have more knowledge and the site will be faster. Another win all ’round!

At the other end of the continuum was an issue that wasn’t technical. From a technical point of view, there was just one teeny weeny little script. But that little script is Google Tag Manager which then calls many, many other scripts that are not so teeny weeny. Third party scripts …the bane of web performance!

In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!

Remember when I did a countdown of the top four web performance challenges? At the number one spot is other people’s JavaScript.

Now one technical solution would be to remove the Google Tag Manager script. But that’s probably not very practical—you’ll probably just piss off some other department. That said, if you can’t find out which department was responsible for adding the Google Tag Manager script in the first place, it might well be an option to remove it and then wait and see who complains. If no one notices it’s gone, job done!

More realistically, there’s someone who’s added that Google Tag Manager script for their own valid reasons. You’ll need to talk to them and understand their needs.

Again, as with images uploaded in a Content Management System, they may not be aware of the performance problems caused by third-party scripts. You could try throwing numbers at them, but I think you get better results by telling the story of performance.

Use tools like Request Map Generator to help them visualise the impact that third-party scripts are having. Talk to them. More importantly, listen to them. Find out why those scripts are being requested. What are the outcomes they’re working towards? Can you offer an alternative way of providing the data they need?

I think many of us developers are intimidated or apprehensive about approaching people to have those conversations. But it’s necessary. And in its own way, it can be as rewarding as tinkering with code. If the end result is a faster website, then the work is definitely worth doing—whether it’s technical work or people work.

Personally, I just really enjoy working on anything that will end up improving a website’s performance, and by extension, the user experience. If you fancy working with me on your site, you should get in touch with Clearleft.

Thursday, August 13th, 2020

BetrayURL

Back in February, I wrote about an excellent proposal by Jake for how browsers could display URLs in a safer way. Crucially, this involved highlighting the important part of the URL, but didn’t involve hiding any part. It’s a really elegant solution.

Turns out it was a Trojan horse. Chrome are now running an experiment where they will do the exact opposite: they will hide parts of the URL instead of highlighting the important part.

You can change this behaviour if you’re in the less than 1% of people who ever change default settings in browsers.

I’m really disappointed to see that Jake’s proposal isn’t going to be implemented. It was a much, much better solution.

No doubt I will hear rejoinders that the “solution” that Chrome is experimenting with is pretty similar to what Jake proposed. Nothing could be further from the truth. Jake’s solution empowered users with knowledge without taking anything away. What Chrome will be doing is the opposite of that, infantilising users and making decisions for them “for their own good.”

Seeing a complete URL is going to become a power-user feature, like View Source or user style sheets.

I’m really sad about that because, as Jake’s proposal demonstrates, it doesn’t have to be that way.

Friday, July 31st, 2020

Checked in at Pelicano. Iced latte — with Jessica

Friday, July 17th, 2020

Checked in at Pelicano. Iced latte — with Jessica

Thursday, July 16th, 2020

Hey now

Progressive enhancement is at the heart of everything I do on the web. It’s the bedrock of my speaking and writing too. Whether I’m writing about JavaScript, Ajax, HTML, or service workers, it’s always through the lens of progressive enhancement. Sometimes I explicitly bang the drum, like with Resilient Web Design. Other times I don’t mention it by name at all, and instead talk only about its benefits.

I sometimes get asked to name some examples of sites that still offer their core functionality even when JavaScript fails. I usually mention Amazon.com, although that has other issues. But quite often I find that a lot of the examples I might mention are dismissed as not being “web apps” (whatever that means).

The pushback I get usually takes the form of “Well, that approach is fine for websites, but it wouldn’t work for something like Gmail.”

It’s always Gmail. Which is odd. Because if you really wanted to flummox me with a product or service that defies progressive enhancement, I’d have a hard time with something like, say, a game (although it would be pretty cool to build a text adventure that’s progressively enhanced into a first-person shooter). But an email client? That would work.

Identify core functionality.

Read emails. Write emails.

Make that functionality available using the simplest possible technology.

HTML for showing a list of emails, HTML for displaying the contents of the emails, HTML for the form you write the response in.

Enhance!

Now add all the enhancements that improve the experience—keyboard shortcuts; Ajax instead of full-page refreshes; local storage, all that stuff.

Can you build something that works just like Gmail without using any JavaScript? No. But that’s not what progressive enhancement is about. It’s about providing the core functionality (reading and writing emails) with the simplest possible technology (HTML) and then enhancing using more powerful technologies (like JavaScript).

Progressive enhancement isn’t about making a choice between using simpler more robust technologies or using more advanced features; it’s about using simpler more robust technologies and then using more advanced features. Have your cake and eat it.
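
As a toy sketch of that layering (this is not how HEY or Gmail is actually built, and the URL is made up): the form works on its own, and the script is purely an enhancement.

<form action="/reply" method="post">
  <textarea name="message" required></textarea>
  <button type="submit">Send</button>
</form>

<script>
  // Enhancement: if JavaScript is available, submit via Ajax instead of a full-page refresh.
  const form = document.querySelector('form');
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    await fetch(form.action, { method: 'POST', body: new FormData(form) });
    // ...then update the page in place.
  });
</script>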

Fortunately I no longer need to run this thought experiment to imagine what it would be like if something like Gmail were built with a progressive enhancement approach. That’s what HEY is.

Sam Stephenson describes the approach they took:

HEY’s UI is 100% HTML over the wire. We render plain-old HTML pages on the server and send them to your browser encoded as text/html. No JSON APIs, no GraphQL, no React—just form submissions and links.

If you think that sounds like the web of 25 years ago, you’re right! Except the HEY front-end stack progressively enhances the “classic web” to work like the “2020 web,” with all the fidelity you’d expect from a well-built SPA.

See? It’s not either resilient or modern—it’s resilient and modern. Have your cake and eat it.

And yet this supremely sensible approach is not considered “modern” web development:

The architecture astronauts who, for the past decade, have been selling us on the necessity of React, Redux, and megabytes of JS, cannot comprehend the possibility of building an email app in 2020 with server-rendered HTML.

HEY isn’t perfect by any means—they’ve got a lot of work to do on their accessibility. But it’s good to have a nice short answer to the question “But what about something like Gmail?”

It reminds me of responsive web design:

When Ethan Marcotte demonstrated the power of responsive design, it was met with resistance. “Sure, a responsive design might work for a simple personal site but there’s no way it could scale to a large complex project.”

Then the Boston Globe launched its responsive site. Microsoft made their homepage responsive. The floodgates opened again.

It’s a similar story today. “Sure, progressive enhancement might work for a simple personal site, but there’s no way it could scale to a large complex project.”

The floodgates are ready to open. We just need you to create the poster child for resilient web design.

It looks like HEY might be that poster child.

I have to wonder if it’s a coincidence or connected that this is a service that’s also tackling ethical issues like tracking? Their focus is very much on people above technology. They’ve taken a human-centric approach to their product and a human-centric approach to web development …because ultimately, that’s what progressive enhancement is.

Thursday, June 11th, 2020

CSS custom properties and the cascade

When I wrote about programming CSS to perform Sass colour functions I said this about the brilliant Lea Verou:

As so often happens when I’m reading something written by Lea—or seeing her give a talk—light bulbs started popping over my head (my usual response to Lea’s knowledge bombs is either “I didn’t know you could do that!” or “I never thought of doing that!”).

Well, it happened again. This time I was reading her post about hybrid positioning with CSS variables and max(). But the main topic of the post wasn’t the part that made me go “Huh! I never knew that!”. Towards the end of her article she explained something about the way that browsers evaluate CSS custom properties:

The browser doesn’t know if your property value is valid until the variable is resolved, and by then it has already processed the cascade and has thrown away any potential fallbacks.

I’m used to being able to rely on the cascade. Let’s say I’m going to set a background colour on paragraphs:

p {
  background-color: red;
  background-color: color(display-p3 1 0 0);
}

First I’ve set a background colour using a good ol’ fashioned keyword, supported in browsers since day one. Then I declare the background colour using the new-fangled color() function which is supported in very few browsers. That’s okay though. I can confidently rely on the cascade to fall back to the earlier declaration. Paragraphs will still have a red background colour.

But if I store the background colour in a custom property, I can no longer rely on the cascade.

:root {
  --myvariable: color(display-p3 1 0 0);
}
p {
  background-color: red;
  background-color: var(--myvariable);
}

All I’ve done is swapped out the hard-coded color() value for a custom property but now the browser behaves differently. Instead of getting a red background colour, I get the browser default value. As Lea explains:

…it will make the property invalid at computed value time.

The spec says:

When this happens, the computed value of the property is either the property’s inherited value or its initial value depending on whether the property is inherited or not, respectively, as if the property’s value had been specified as the unset keyword.

So if a browser doesn’t understand the color() function, it’s as if I’ve said:

background-color: unset;

This took me by surprise. I’m so used to being able to rely on the cascade in CSS—it’s one of the most powerful and most useful features in this programming language. Could it be, I wondered, that the powers-that-be have violated the principle of least surprise in specifying this behaviour?

But a note in the spec explains further:

Note: The invalid at computed-value time concept exists because variables can’t “fail early” like other syntax errors can, so by the time the user agent realizes a property value is invalid, it’s already thrown away the other cascaded values.

Ah, right! So first of all browsers figure out the cascade and then they evaluate custom properties. If a custom property evaluates to gobbledygook, it’s too late to figure out what the cascade would’ve fallen back to.

Thinking about it, this makes total sense. Remember that CSS custom properties aren’t like Sass variables. They aren’t evaluated once and then set in stone. They’re more like let than const. They can be updated in real time. You can update them from JavaScript too. It’s entirely possible to update CSS custom properties rapidly in response to events like, say, the user scrolling or moving their mouse. If the browser had to recalculate the cascade every time a custom property didn’t evaluate correctly, I imagine it would be an enormous performance bottleneck.
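
For example, here’s a minimal sketch (the property name is made up) of updating a custom property in real time as the pointer moves:

// Update a custom property on every mousemove; any CSS using var(--mouse-x) updates too.
document.addEventListener('mousemove', (event) => {
  document.documentElement.style.setProperty('--mouse-x', event.clientX + 'px');
});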

So even though this behaviour surprised me at first, it makes sense on reflection.

I’ve probably done a terrible job explaining the behaviour here, so I’ve made a Codepen. Although that may also do an equally terrible job.

(Thanks to Amber for talking through this with me and encouraging me to blog about it. And thanks to Lea for expanding my mind. Again.)

Friday, May 15th, 2020