Present

  • I saw this on Bluesky recently.

    And I think it gets to the root of what I don’t like about AI. It’s not about how good it is at certain things, or how magical it can feel, or whether or not it can trick people into thinking a human might have been behind something it created. Honestly, I don’t really care. Human beings are incredible, and we are worth far more than machines.

  • I wrote about Bill Gross and GoTo.com once. He’s a fascinating individual who figured out the key to monetizing search years before Google eventually copied him. He has a certain way of understanding technology as inevitable and rolling along with it, rather than trying to resist it.

    He appears to have made the same determination about AI. I’m not sure I agree that we should give up on the resisting part, but if anybody’s going to save at least some semblance of the open web from the onslaught of AI, it may very well be Bill Gross. John Battelle, who wrote the literal book on Google, appears to agree.

    Battelle has taken an interest in Gist.AI, a new startup that grew out of Gross’s startup accelerator and that he now leads. Gross is approaching the problem of AI with his usual pragmatism, proposing a solution that focuses on partnerships between publishers and AI search.

    Those ravenous AI bots hoovering up websites at a rate of thousands of crawls a day? They’re shoplifting, Gross says. AI services should pay for the privilege of ransacking the open Internet, he argues. This concept – “pay per crawl” – has already taken root: Internet infrastructure giant CloudFlare has implemented a pay-per-crawl marketplace premised on a similar philosophy. Publishers that aren’t being paid by those data-hungry AI bots can now avail themselves of a free service from CloudFlare that blocks them at the door. 

    Battelle seems to think that Gist.AI might give publishers the tools to fight back against the larger AI companies. I’ve actually heard rumblings about Gist in the publisher world, so maybe he’s right. He certainly has been before.
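
    For what it’s worth, the “blocks them at the door” idea has a low-tech counterpart any publisher can deploy themselves: a robots.txt file naming the crawlers. Here’s a minimal sketch using two publicly documented AI crawler user agents (OpenAI’s GPTBot and Common Crawl’s CCBot). Keep in mind robots.txt is purely voluntary, which is exactly why Cloudflare enforces blocking at the network edge instead:

    ```txt
    # robots.txt: ask known AI crawlers to stay out site-wide
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # Everyone else may crawl as usual (empty Disallow allows everything)
    User-agent: *
    Disallow:
    ```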

  • Matthew Phillips:

    Furthermore, the nature of engagement itself has been subtly reshaped. Algorithms often favor content that elicits strong, quick reactions – the kind that can be easily signaled with an emoji or a “reaction thumbnail.” Nuanced discussion and thoughtful communication, the traditional hallmarks of blog comment sections and the communities around them, take a backseat to attention-grabbing, often polarizing, content. The algorithm, in its quest for maximum engagement, can inadvertently filter out the very depth and thoughtfulness that blogs once championed.

    Look around at the average blog and you’re bound to notice the dearth of comments. It wasn’t long ago that the number of comments on a post was a solid indicator that it struck a chord. As someone who runs a long-running blog, I’ve seen this happen in seemingly real time; all you have to do is compare an article from 2015 like this to a similarly provocative article from 2025 like this. Reactions are the new currency and they’re happening on some other platform.

    I’ve toe-dipped into the IndieWeb over on my personal blog because it helps mitigate some of the silence by porting social media interactions into the site so they can be rendered in both places. It’s sorta like a modern-day version of Pingbacks. But it also feels like a FOMO-driven response because the interactions are happening elsewhere and all I’m doing is collecting artifacts. I’m still required to engage outside of my web home and feed the algorithm.
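
    For the curious, the modern-Pingback plumbing here is the W3C Webmention protocol: a sender fetches the target page, discovers its advertised webmention endpoint, then POSTs `source` and `target` URLs to it. Below is a rough sketch of just the discovery step, assuming the endpoint is advertised in the page markup (real senders also check the HTTP `Link` header and resolve relative URLs):

    ```python
    from html.parser import HTMLParser

    class EndpointFinder(HTMLParser):
        """Finds the first <link> or <a> advertising rel="webmention"."""

        def __init__(self):
            super().__init__()
            self.endpoint = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            # rel can hold multiple space-separated values, e.g. rel="webmention canonical"
            rels = (attrs.get("rel") or "").split()
            if tag in ("link", "a") and "webmention" in rels and self.endpoint is None:
                self.endpoint = attrs.get("href")

    def find_webmention_endpoint(html: str):
        """Return the webmention endpoint advertised in an HTML page, if any."""
        parser = EndpointFinder()
        parser.feed(html)
        return parser.endpoint
    ```

    Once the endpoint is known, the mention itself is a single form-encoded POST, which is why so many IndieWeb tools can interoperate around it.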

  • The Internet Phonebook is sold out, though there should be more copies in stock soon. It’s a cool idea from Kristoffer Tjalve and Elliott Cost that curates a directory of lovely personal websites into a physical book you can carry around with you. Each site has a phone number that, when dialed through the phonebook’s dial-a-site feature, will direct you to the right place.

    This is paired with some lovely essays that give you a chance to feel the weight of (a corner of) the Internet in the real world. I love any opportunity to bring a caring side of the web out of our screens and into the world.

    That phone to web connection makes me think of net artist Heath Bunting, who created an online directory of phone numbers for payphones at King’s Cross Station in London. Visitors to the site were encouraged to call around 5PM for maximum effect and to connect with other web citizens that might drift towards the phone at that time.

    The web and the real world are the same thing. I like projects that acknowledge that.

  • Yes, yes, it’s Global Accessibility Awareness Day. While that’s deservedly today’s focal point, it shouldn’t go unnoticed that the W3C published a set of Privacy Principles as well:

    This document is intended to help its audiences address privacy concerns as early as possible in the life cycle of a new web standard or feature, or in the development of web products. Beginning with privacy in mind will help avoid the need to add special cases later to address unforeseen but predictable issues or to build systems that turn out to be unacceptable to users.

    There are 30 principles (and sub-principles) in all. A few choice selections, starting with restricting the sort of data that is transferred around to what’s strictly necessary:

    Principle 2.2.1: Sites, user agents, and other actors should restrict the data they transfer to what’s either necessary to achieve their users’ goals or aligns with their users’ wishes and interests.

    People have rights when the data is about them:

    This one’s particularly damning to browsers and marketers:

    Principle 2.9.2: User agents and sites must take steps to protect their users from abusive behaviour, and abuse mitigation must be considered when designing web platform features.

    And let’s ditch legal jargon when explaining how data is handled:

    Principle 2.11.2: Information about privacy-relevant practices should be provided in both easily accessible plain language form and in machine-readable form.

    How many times have you agreed to or confirmed cookie notices? Wouldn’t it be great to have access to your choices after the fact?

    Principle 2.12.3: It should be as easy for a person to check what consent they have given, to withdraw consent, or to opt out or object, as to give consent.

    Lastly, let’s make sure we don’t punish someone for wanting to protect their privacy:

    Principle 2.14: Actors must not retaliate against people who protect their data against non-essential processing or exercise rights over their data.

  • It’s the 13th Global Accessibility Awareness Day today. A good reminder that there are a lot of web folks who care a hell of a lot about this kind of thing. And it’s the little things, in aggregate, that can help us shake off efforts that are hostile to accessible experiences.

  • A great bird’s-eye view of the historical landscape of visual programming, starting with Visual Basic in 1991 and ending with what is ultimately a push to use Nordcraft’s product in 2025.

    Salma’s actual point, however, is that visual coding apps and platforms have failed to get it “right” even after 30 years of attempts.

    It’s no surprise we weren’t getting it right in 1995, if we still can’t get it right 30 years later with all of this knowledge, experience, and empathy under our belts. And I’m not even going to mention at this point how AI can’t get this right, either. Of course it can’t; it doesn’t possess the capacity for empathy.

    Which, of course, is an indirect response to Figma introducing its own visual site builder, Figma Sites. The public response to Figma Sites has been abysmal because of the inaccessible HTML that the tool generates.

    This week on May 7th 2025 Figma announced Figma Sites, a tool to publish your designs built in Figma directly to the web. But this new product has not been well received. Adrian Roselli warns us: Do not publish your designs on the web with Figma Sites.

    Adrian’s post doesn’t even delve deeply into the accessibility issues produced by Figma Sites. All he needs to do is run simple automated tests to demonstrate just how deep the dumpster fire is.

    It feels relevant to bring up Jakob Nielsen’s recent remarks that AI will completely eliminate accessibility issues:

    Accessibility will disappear as a concern for web design, as disabled users will only use an agent that transforms content and features to their specific needs.

    Will it? Even if it does, perhaps Jony Ive’s warning to designers from Stripe Sessions 2025 this past week still applies:

    Even if you’re innocent in your intention, if you’re involved in something that has poor consequences, you need to own it.

  • There are two events drifting the web back toward what feels like peaceful simplicity.

    CSS Day is coming up in June. Its speaker lineup is packed with some of the most thoughtful CSS practitioners and creators the web has. There’s a lot to be excited about there, but what’s most refreshing is that it’s about CSS. No frameworks, no vibe coding. Just. Simple. CSS.

    They will be talking about building something sustainable and sturdy using CSS as a foundation. They’ll be returning to a core technology that makes the web great and showing what it makes possible in practice.

    Then there’s the jQuery reunion. jQuery left its mark on the history of web development. But it’s important to remember that its genesis began at a time when the web was filled with a lot of potential. And I think it’s fair to say that jQuery helped it find that potential and deliver it to a massive audience (something it’s also fair to say it’s largely still doing on many, many websites).

    It’s a good moment to return to. To turn over and examine what we were all trying to gain and work towards in that moment. I expect we’ll see a lot of that kind of thing at the reunion.

    Hidden in these examinations of the core technologies of the web is a desire to return to a web design industry that was innovating and creating at a rapid clip. The vision of the web was to share, outwardly, information with one another. And returning to simplicity is often what makes that possible. It makes the web broadly accessible. It turns anyone into a web creator.

    Maybe us old timers will keep trying to make things simple. And maybe that’s a good thing actually.

  • W3C Technical Architecture Group:

    Third-party (AKA cross-site) cookies are harmful to the web, and must be removed from the web platform. 

    […]

    We are strongly in favor of innovations to build sustainable business models on the web platform, but an in-depth discussion of the various possibilities are outside of the scope of this document. From an architectural standpoint, web standards should avoid encoding particular business models that are available to authors, publishers, and web content creators.

    Those are some strong words from the W3C that leave no doubt about its position on removing third-party cookies from the web. We recently noted that Google is sidestepping COPPA regulations. Something tells me the W3C is publishing this in response to Google dropping its own plans to remove third-party cookies from Chrome. Let the battle begin!
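
    As a protocol-level footnote, whether a cookie even works cross-site hinges on its SameSite attribute; Chromium-based browsers now treat unmarked cookies as Lax, so third-party use requires an explicit opt-in from the server. A sketch of the two flavors (the cookie names and values here are made up):

    ```http
    # First-party only under the modern Chromium default
    Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax

    # Explicit opt-in to cross-site sending; must also be Secure to be accepted
    Set-Cookie: uid=xyz789; Secure; SameSite=None
    ```

    Removing third-party cookies from the platform essentially means retiring that second flavor.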

  • Google is planning to allow users under 13 to use their AI product, Gemini. But like so many others in this space, they are giving the game away:

    Like its Workspace for Education accounts, Google says children’s data will not be used to train AI. Still, in the email, Google warns parents that “Gemini can make mistakes,” and kids “may encounter content you don’t want them to see.”

    Our legal frameworks have begun to fall apart in the age of AI. Section 230, for instance, is difficult to apply in an age when there is no personal responsibility. If I prompt AI to make me something, and it generates something illegal, how do we regulate that?

    There have been some strides, but between powerful lobbying and technical incoherence in the federal government, progress is slow.

    Google is going to test the limits of COPPA with this one. If you work on the web, you have likely had to make adjustments to sites to ensure compliance with COPPA. It’s a pretty smart law and it protects children using the web from having their data improperly collected. That’s why Google is making this claim: that way they can say later they tried their best, but AI is just too difficult to control.

    We can’t allow companies to pass their accountability over to the machines.