Here are my five latest blog posts - or you can browse a complete archive of all my posts since 2008.

CSS Has 239 Different Ways To Make Something Blue

I’m making a new video course for Dometrain at the moment, and it’s all about CSS - one of the three pillars of the open web, along with HTML and JavaScript. I love CSS, I’ve been working with it literally since it was invented, but I absolutely understand why so many developers don’t enjoy working with it.

CSS has had to incorporate conventions and standards from a wider range of disciplines than any other mainstream technology. Modern CSS incorporates ideas from mechanical typesetting dating back centuries, conventions around information design from hundreds of years of printing and publishing, fifty years of innovations from digital publishing and computer graphics, twenty years of getting stuff wrong on the web while we were still figuring out how to get it right - right up to things like how to account for the “dynamic island” on the latest iPhone handsets.

To say it’s accumulated a few idiosyncrasies along the way would be an understatement. Yesterday, while putting together the module about how the various colour models work in CSS, I found myself wondering how many different ways there are in modern CSS to give a box a blue background.

I got to 239. Two hundred and thirty-nine different ways to say “the box is blue”.

You’ve probably heard of named colours (color: blue;) and hex codes (color: #0000ff;). Hex codes can also be written as three digits (#00f), four digits (#00ff), and eight digits (#0000ffff).
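
All four of these, for instance, are exactly the same blue - the shorter forms just expand digit-by-digit:

.box { background: #00f; }      /* expands to #0000ff */
.box { background: #00ff; }     /* four digits: the last one is alpha */
.box { background: #0000ff; }
.box { background: #0000ffff; } /* eight digits: RRGGBBAA */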

Then there’s the rgb() function, which can take each colour component as a decimal or as a percentage. There’s hsl(), which takes hue, saturation, and lightness, along with a bunch of newer colour models - hwb(), lab(), lch(), oklab() and oklch() - and hwb(), lch() and oklch() all take a hue component too.

Except hue in CSS is an angle - the colour’s position on a colour wheel - and CSS already has a unit system for angles, used for rotations and transforms, so the colour models just reuse that. Which means you can write the hue component as degrees (with or without deg), radians, gradians (the cursed unit. 400 grads in a circle. Why does it even exist? I don’t know.), or turns. So if a colour specification involves hue, there are five different ways to write it.
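
Here’s the same blue in hsl(), with the hue written five different ways (the radian and gradian values are rounded, which is one more reason nobody writes them):

.box { background: hsl(240 100% 50%); }          /* bare number = degrees */
.box { background: hsl(240deg 100% 50%); }
.box { background: hsl(4.18879rad 100% 50%); }   /* 240 × π ÷ 180 */
.box { background: hsl(266.667grad 100% 50%); }  /* 240 × 400 ÷ 360 */
.box { background: hsl(0.666667turn 100% 50%); } /* 240 ÷ 360 */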

Add in two different ways to specify alpha transparency - / 100% and / 1.0 - and you end up with well over two hundred different ways to say “the box is blue”… and this is just using the modern CSS colour syntax; we’re not even getting into the legacy rgb() and hsl() function syntax, their aliases rgba() and hsla(), or any of the wonderful things you can do with calc() and relative colours.
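
For the record, that alpha pair looks like this - the same fully opaque blue, twice:

.box { background: rgb(0 0 255 / 1.0); }
.box { background: rgb(0 0 255 / 100%); }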

OK, let’s be fair here. If the language designers hadn’t reused CSS angular units for hue, we’d be looking at more like 40, maybe 50 syntax variants. Pick a lane for how you want to specify transparency and you’re down to 20 or so. You’re very unlikely to use the LAB or LCH colour models in production unless you know exactly what you’re doing, so that chops it down further; in reality, you’re not going to encounter more than a handful of these variants, and the vast majority of sites out there just use hex codes for everything.

But, y’know, if you’ve got the whole chaotic evil vibe going on, why not spec all your colours as oklch, lightness and chroma as arbitrary decimals, hue in grads, and give everything an alpha channel that’s randomly a percentage or a decimal, even for colours which are fully opaque?

I think you’ll all agree that color: blue; looks kinda dumb, but color: oklch(0.452 0.31 292grad / 100%); is clearly the work of a genius.

Check out the whole list over at https://dylanbeattie.net/miscellany/blues.html

"CSS Has 239 Different Ways To Make Something Blue" was posted by Dylan Beattie on 30 July 2025 • permalink

Naming Things is Hard... Renaming Things is Harder

You know those little things that you just sort of ignore, over and over and over again, until finally one day the planets align and you’re like “No! I’m not putting up with this any more! There must be a way to fix it!”

[Photo: Canon EOS M200]

Welcome to my cameras. I have two Canon EOS M200 cameras, which I use for streaming, recording training videos, teaching online workshops, all kinds of stuff. One is black, one is white, they are otherwise identical. They’re both connected to my main PC using Elgato Cam Link 4K HDMI capture devices, which for the remainder of this article I will call camlinks because they don’t have a better generic name.

They’re also connected using micro-USB, which means I can use Canon’s Remote Shooting utility to control things like the F-stop, white balance, and ISO. The cameras themselves are permanently mounted around my desk — one is above my main screen, the other one’s built into a teleprompter — so it’s kinda hard to fiddle with the settings menu.

The problem is that when I go to fire up the EOS Utility, it asks me to pick which camera I’m using… am I using the Canon EOS M200, or the Canon EOS M200? And then there’s the fact that both of the dongles show up in Windows as “Cam Link 4K”, which means setting up video sources in programs like OBS Studio normally involves picking the wrong camera at least once.

Well, this morning I had that fateful nerd thought… “I should fix this. How hard can it be?” Grab your razor and strap in, friends… we’re going yak shaving! Now, most of the time, if you want to rename a thing in Windows, you right-click it, choose “Rename”, and give it a new name.

That doesn’t work in Device Manager. I mean, sure, you can rename a file that way, but why would anybody ever need to rename a camera? OK, no big deal. Those names have got to come from somewhere. Probably the registry. So I fire up regedit, Ctrl-F, “Cam Link 4K”, hit “Find Next”…

And I wait, and wait, and wait a bit, and then I get bored, because this is a Ryzen 9950X3D with 128GB of RAM and I didn’t spend that kind of money so I could sit around and wait for things, dammit!

I tried writing a PowerShell script to do the same thing — trawl the registry, looking at all the keys and subkeys and entries and values — but it did that infuriating thing where it takes so long to do nothing that, 30 seconds in, it wasn’t clear whether the script was working but hadn’t matched anything, or I’d screwed up the matching logic.

OK, much better approach: let’s dump the entire system registry to a text file and then I can edit it properly. File, Export, registry.txt, takes about a second. Done.

I now have a 540MB text file full of bits of Windows registry. It looks like this:

[HKEY_LOCAL_MACHINE\SYSTEM\DriverDatabase\DriverPackages\termbus.inf_amd64_7ccf415b3c0cf753\Descriptors\TI_COMPAT_DEVICE]
"Configuration"="TS_INPT_DEVICE.NT"
"Manufacturer"="%msft%"
"Description"="%ts_inpt_device.devicedesc%"

[HKEY_LOCAL_MACHINE\SYSTEM\DriverDatabase\DriverPackages\termbus.inf_amd64_7ccf415b3c0cf753\Descriptors\TS_BUS]

[HKEY_LOCAL_MACHINE\SYSTEM\DriverDatabase\DriverPackages\termbus.inf_amd64_7ccf415b3c0cf753\Descriptors\TS_BUS\TS_INPT]
"Configuration"="TS_INPT_BUS.NT"
"Manufacturer"="%msft%"
"Description"="%ts_inpt_bus.devicedesc%"

[HKEY_LOCAL_MACHINE\SYSTEM\DriverDatabase\DriverPackages\termbus.inf_amd64_7ccf415b3c0cf753\Strings]
"msft"="Microsoft"
"ts_inpt_device.devicedesc"="Remote Desktop Input Device"
"ts_inpt_bus.devicedesc"="Remote Desktop Input Bus Enumerator"

[HKEY_LOCAL_MACHINE\SYSTEM\DriverDatabase\DriverPackages\termkbd.inf_amd64_b0e97bc9e5ad246d]
"Version"=hex:ff,ff,09,00,00,00,00,00,6b,e9,36,4d,25,e3,ce,11,bf,c1,08,00,2b,\
  e1,03,18,00,c0,17,1d,14,c2,c9,01,7e,04,f4,65,00,00,0a,00,00,00,00,00,00,00,\
  00,00
"Provider"="Microsoft"
"SignerScore"=dword:0d000003
"FileSize"=hex(b):70,34,01,00,00,00,00,00
"StatusFlags"=dword:00000112
@="termkbd.inf"

Each chunk is in this format:

[HKEY_LOCAL_MACHINE\SYSTEM\Path\To\Registry\Key]
"Name"="String Value"
"AnotherName"=dword:12345678
"ThirdName"=hex(b):aa,bb,cc,dd,ee,ff,11,22

So I want to find every chunk that contains any entry whose value contains "Cam Link 4K" or "Canon EOS M200", and then extract:

  • The first row of that chunk, which is the path to the registry key I need to edit
  • The entry itself

Then — in theory — I’ve got a handful of registry entries in a file, which I can edit in VS Code or something, import it into regedit, and presto! rename all the things.

First idea: pull it into VS Code, replace all the \n with a marker __EOL__ so I get every block on its own line, delete all the lines that don’t contain "Cam Link 4K" or "Canon EOS M200", turn all the __EOL__ back into \n, and then do the rest by hand.

Yeah… VS Code won’t do that.


TextPad wouldn’t do it either…

But it's OK. We solved this problem. We solved it BEFORE I WAS BORN. The beardy wizard people who created Unix knew all about editing files that wouldn't fit in memory, because they built Unix for computers that had an ENTIRE MEGABYTE of RAM... which you had to share with the rest of the university.

— Dylan Beattie (@dylanbeatt.ie) 26 July 2025 at 10:42

Yep, this is a job for awk. You probably don’t know what awk is. Awk is a pattern-matching text-processing language originally created for Unix. So I asked Copilot to write me an awk script. (What, you think I can remember how to write awk? University was a long time ago, friends…)

BEGIN {
	RS = "\n\n";   # Blank lines separate records (blocks)
	ORS = "\n\n";  # Output blocks separated by two newlines
}

{
	if ($0 ~ /Cam Link 4K/ || $0 ~ /Canon EOS M200/) {
		n = split($0, lines, "\n")
		output = lines[1]                     # the [HKEY_...] key path
		for (i = 2; i <= n; i++) {
			if (lines[i] ~ /Cam Link 4K/ || lines[i] ~ /Canon EOS M200/) {
				output = output "\n" lines[i] # keep only the matching entries
			}
		}
		print output
	}
}

Yeah, that’s what awk looks like. Remember, this is the language that Larry Wall looked at, went “well, golly, I can do better than that”, and… invented Perl.

Of course, it didn’t work. Me & my little electric copilot buddy had missed two rather significant details… first, the RS record separator? We’re trawling a file created by Windows regedit. It’s using \r\n, not \n. Easy fix: change it to `RS = "\r?\n\r?\n"` (and look, now it’ll work cross-platform if I ever need to awk the Windows registry on macOS!)

Still doesn’t work… because the Windows Registry Editor’s Export feature creates text files that are encoded as UTF16-LE, and awk don’t do UTF16. So I use VS Code to save the registry file as UTF-8, and off we go…

gawk -f filter.awk registry.txt > devices.txt

It works! devices.txt now has a little registry snippet for every single chunk of registry that includes "Canon EOS M200" or "Cam Link 4K".
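
(Incidentally, if you’d rather not round-trip through VS Code, iconv will do the same UTF-16 to UTF-8 conversion from the command line, assuming you’ve got a Unix-ish shell handy - the filenames here are just for illustration:

iconv -f UTF-16LE -t UTF-8 registry.txt > registry-utf8.txt

…and then point gawk at the UTF-8 copy.)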

"Canon EOS M200" appears 25 times in the registry. "Cam Link 4K" appears 60 times - and the exact entry "FriendlyName"="Cam Link 4K" accounts for 41 of those... crikey. That's a lot of times.

— Dylan Beattie (@dylanbeatt.ie) 26 July 2025 at 11:22

OK, let’s figure out which one goes where.

I am assuming at this point that nothing in Windows is stupid enough to actually open, connect, etc. devices based on a field called FriendlyName. If I’m wrong, things are about to get extremely hilarious indeed.

So I use TextPad’s really handy “sequence replace” feature, which lets you use a regex to find something and then include an incrementing sequence number in the replacement expression:

[Screenshot: TextPad search and replace dialog]
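
The expressions were along these lines - I’m reconstructing them from the result rather than copying them from the dialog, and the \i sequence token is TextPad’s, so treat this as a sketch:

Find what:     "FriendlyName"="(Cam Link 4K|Canon EOS M200)"
Replace with:  "FriendlyName"="\1 UNIQUE\i"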

The result is a registry file where every FriendlyName value is now unique - so I can in theory reboot, see which names appear in which drop-down menus and dialogs, and then edit them accordingly. In theory.

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Portable Devices\Devices\USB#VID_04A9&PID_32EF#9&39F7FE61&0&3]
"FriendlyName"="Canon EOS M200 UNIQUE12"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Portable Devices\Devices\USB#VID_04A9&PID_32EF#A&D9BD236&0&3]
"FriendlyName"="Canon EOS M200 UNIQUE13"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{eec5ad98-8080-425f-922a-dabf3de3f69a}\0005]
"FriendlyName"="Canon EOS M200 UNIQUE14"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{eec5ad98-8080-425f-922a-dabf3de3f69a}\0009]
"FriendlyName"="Canon EOS M200 UNIQUE15"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_04A9&PID_32EF\9&39f7fe61&0&3]
"FriendlyName"="Canon EOS M200 UNIQUE38"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_04A9&PID_32EF\a&d9bd236&0&3]
"FriendlyName"="Canon EOS M200 UNIQUE39"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_0FD9&PID_0066&MI_00\a&1b1e3ad0&0&0000]
"FriendlyName"="Cam Link 4K UNIQUE40"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_0FD9&PID_0066&MI_03\a&1b1e3ad0&0&0003]
"FriendlyName"="Cam Link 4K UNIQUE41"

Add the header line to the top of the file, otherwise regedit rejects it:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Portable Devices\Devices\USB#VID_04A9&PID_32EF#9&39F7FE61&0&3]
"FriendlyName"="Canon EOS M200 UNIQUE12"

...and so on...

Import…


Reboot… IT WORKED!


Well, it worked… mostly. At first, the Canon EOS Utility Launcher didn’t pick it up at all; it turns out some of the Canon EOS utilities read the DeviceDesc value from the registry, not FriendlyName, so I edited that one too… at that point I got into a good half-hour of editing something, rebooting, seeing what had changed, editing again, rebooting again, turning it off and on again… and quite a few moments where I thought I’d tried everything I could think of, including a reboot, and it still hadn’t worked…

And then, one final, glorious reboot, and there it was.


And you know what?

I will bet money, good, solid, chunky cash dollar money, that at some point, somebody at Canon said “er… boss, what happens if somebody buys two of the same camera, and has them both plugged in at the same time over USB?” and somebody’s boss said words to the effect of “stop derailing the planning meeting with your stupid edge cases, Chris, we have work to do and you’re not helping.”

I see you, Chris.

I see you, and I salute you. 🫡

Oh, and Microsoft: if we could get right-click, Rename… in the Device Manager? That’d be, like, just swell.

"Naming Things is Hard... Renaming Things is Harder" was posted by Dylan Beattie on 26 July 2025 • permalink

The Subtle Art Of Deprecating API Endpoints

I had an app fail in production the other day. Not seriously - it only affected a couple of admin screens - but it failed because HubSpot had deprecated some of their API endpoints. (That’s nerd speak for “we were using a thing and they turned it off.”)

https://developers.hubspot.com/changelog/breaking-change-removed-support-for-referencing-custom-object-types-by-base-name

Sure, they announced in October 2024 that this particular endpoint was being deprecated. The only problem is… I didn’t see the announcement, because I didn’t even start working on this integration until April 2025 - in fact, I’d never worked with HubSpot’s API at all before then. I followed their docs, built the integration points I needed, tested it all… and somehow managed to build and test an integration against an endpoint which was already scheduled for deprecation, without ever having the faintest clue.

Wouldn’t it be nice if, the day they decide something’s going to get switched off, that feature was no longer available to any new customers? Sure, for the folks who were already using it before the announcement, it makes sense to give them six months or whatever to update their code. But it seems a bit odd to me that they’d offer completely new integrations access to a feature they already know is going to shut down soon.

…then again, maybe I’m just salty ‘cos I don’t like it when other people break my stuff. I guess the lesson is to always assume that every single API request you make might randomly start returning an HTTP 400, at any point, for no good reason, and engineer around that as best you can. Fallback caching actually kept the thing running for at least a week after the endpoint in question was deprecated, which I’m kinda happy about - but I could also have wired it up to actually tell somebody if it had been running on cached data for more than 24 hours.
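
Something like this, maybe - a minimal sketch in Python, where the function names and the 24-hour threshold are illustrative rather than lifted from the actual app:

import time

CACHE_MAX_AGE = 24 * 60 * 60   # seconds on cached data before we complain

cache = {}  # url -> (timestamp, payload)

def fetch_with_fallback(url, fetch, alert):
    """Try the live API; fall back to the cache, and shout if it's stale."""
    try:
        payload = fetch(url)              # any HTTP error raises here
        cache[url] = (time.time(), payload)
        return payload
    except Exception:
        fetched_at, payload = cache[url]  # KeyError if we've never seen it
        age = time.time() - fetched_at
        if age > CACHE_MAX_AGE:
            alert(f"{url} served from cache for {age / 3600:.0f} hours now")
        return payload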

"The Subtle Art Of Deprecating API Endpoints" was posted by Dylan Beattie on 16 July 2025 • permalink

We Miss You, Tommy Vance

In 1990, BBC Radio 1 broadcast the “Monsters of Rock” festival live from Donington Park. Thunder, Quireboys, Poison, Aerosmith, Whitesnake - and before, between, and after the bands, the broadcast featured live commentary and interviews from Tommy Vance, Mick Wall roaming the backstage area talking to the stars who were in attendance, and Richard Skinner’s 4-part history of the Monsters of Rock. Free to air. I know this, ‘cos I recorded the whole thing onto a stack of TDK D90 tapes and listened to it on repeat for the better part of the next decade.

Today, five million people paid £25 each to watch the live stream of Ozzy Osbourne’s farewell show with Black Sabbath, broadcast live from Villa Park in Birmingham. That’s over a hundred million pounds in revenue, just from the live stream… so, of course, between the acts, the stream cut to a professional studio, where experienced presenters talked about the history and significance of the show, interviewed the stars who were performing live… no, of course it didn’t. Instead, we got 15-second cameraphone clips of Sabbath fans all over the world talking about how much they loved the band, interspersed with fundraiser clips from the various charities the day was nominally supporting.

The coverage of the live acts was impeccable, but the interludes made the whole thing feel like one of those “live in my living room” streams everybody was doing back in 2020, and… well, I can’t help thinking that if Tommy Vance was still around, he wouldn’t have let that happen on his watch. But he’s not, and I didn’t realise until today that maybe when TV died, the best part of rock’n’roll broadcasting died with him.

"We Miss You, Tommy Vance" was posted by Dylan Beattie on 06 July 2025 • permalink

Could HTTP 402 be the Future of the Web?

When Tim Berners-Lee first created HTTP, way back in the early 1990s, he included a whole bunch of HTTP status codes that seemed like they might turn out to be useful. The one everybody knows is 404 Not Found - because as a human being, browsing around the web, you see that one a lot. Some of them are basically invisible to humans: every time you load a web page, there’s actually an HTTP 200 OK happening behind the scenes, but you don’t see that unless you go looking for it. Same with various kinds of redirects - if the server gives back an HTTP 301, it’s saying “hey, the page you wanted has moved permanently; here’s the new address”; a 307 Temporary Redirect is saying “that’s over there right now but it might be back here later so don’t update your bookmarks”; and a 304 is saying to your browser “hey, the version of the page that’s in your cache is still good; just use that one”.

If, like me, you’ve spent a lot of time building web apps and APIs, you’ve probably spent hours of your life poring over the HTTP specifications looking for the best response code for a particular situation… like, if somebody asks for something they just deleted, should you return a 404 Not Found, or an HTTP 410 Gone - “hey, that USED to be here but it’s gone and it isn’t coming back”?

And along the way, you’ve probably noticed this status code: HTTP 402 Payment Required. This code has been there since the dawn of the world wide web, but, to quote the Mozilla Developer Network: “the initial purpose of this code was for digital payment systems, however this status code is rarely used and no standard convention exists”.

Well, that might be about to change. Earlier this week, the folks over at Cloudflare announced they’re introducing something called “pay per crawl” - enabling content owners to charge AI crawlers for access.

Let’s put that in context for a second. For the vast majority of commercial websites out there, their ability to make money is directly linked to how many humans are browsing their site. Advertising, engagement, growth, subscribers - however you want to slice it, if you want to monetize the web, you probably want humans looking at your content. It’s not a great model - in fact, it’s mostly turned out to be pretty horrible in all kinds of ways - but, despite multiple valiant attempts to develop better ways to pay creators for content using some sort of microtransactions, advertising is the only one that’s really stood the test of time.

Then AI comes along. By which, in this context, I specifically mean language models trained on publicly available web data. I’ve published a transcript of this video on my blog. If somebody comes along in a couple of weeks and Googles “dylan beattie cloudflare pay per crawl”, Google could provide an AI-generated summary of this article. Or, like a lot of folks out there, that person might skip Google completely and just ask ChatGPT what Dylan Beattie thinks about pay per crawl - and they’ll get a nice, friendly - and maybe even accurate - summary of my post, and so that person never makes it as far as my website.

That is a tectonic shift in the way commercial web publishing works. For nearly thirty years, search engines have been primarily about driving traffic to websites; the entire field of SEO - search engine optimisation - is about how to engineer your own sites and your own content to make it more appealing to engines like Google and Bing. Now, we’re looking at a shift to a model where AI tools crawl your site, slurp up all your content, and endlessly regurgitate it for the edification of their users - and so the connection between the person who’s writing the thing, and the person who’s reading the thing, is completely lost.

For folks like me, who enjoy writing for humans, that’s just sad. For writers who rely on human web traffic to earn their living, it’s catastrophic… and that’s what makes Cloudflare’s proposal so interesting: they’re proposing a way to charge crawlers to read your content.

Now, websites have historically relied on something called a robots.txt file to control what search engines can and can’t see… but it’s advisory. The web was designed to be open. Robots.txt is like leaving all your doors and windows wide open and putting a sign on the lawn saying “NO BURGLARS ALLOWED”, and it’s just one of the many, many ways in which the architects of the web were… let’s say optimistically naïve about how humans actually behave. Which is maybe understandable, given that CERN, the nuclear research centre on the French/Swiss border where Tim Berners-Lee invented the World Wide Web, isn’t renowned for being a hotbed of unscrupulous capitalism.

So you had a choice: make your content wide open and ask the robots to play nice, or lock it away behind a paywall so that only paying subscribers can see it. Which, of course, means the robots can’t see it, so your site never shows up in Google, so we invented all kinds of clever ways to create content that was accessible to search engines but asked human visitors to register, or sign in, or create an account…

Now, in theory, Cloudflare’s proposal is pretty simple. If your website is hosted behind a Cloudflare proxy - and according to Backlinko, 43% of the world’s ten thousand busiest websites use Cloudflare, so that’s a LOT of websites - then when an AI crawler comes asking for content, you can reply with an HTTP 402 Payment Required and give them a price. If the crawler wants to pay, it tries again, including a crawler-exact-price header indicating “yes, I will pay that much” - that’s reactive negotiation. Alternatively, there’s proactive negotiation, where the crawler says “hey, I’ll give you five bucks for that web page” and Cloudflare says “yeah, sure!” - or, if you’ve told your site that page costs ten bucks, the crawler gets an HTTP 402 Payment Required, and it’s free to make a better offer or go crawl somewhere else.
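
If you want to picture the reactive flow in code, here’s a hypothetical sketch from the crawler’s side - the crawler-exact-price request header is from Cloudflare’s announcement, but the crawler-price response header and the plain-decimal price format are my assumptions, not gospel:

import requests

MAX_PRICE_USD = 0.05  # the most we'll pay for any single page

def crawl(url):
    response = requests.get(url)
    if response.status_code != 402:
        return response                # free content, or some unrelated error
    # Assumed: the 402 response quotes a price in a "crawler-price" header.
    price = float(response.headers.get("crawler-price", "inf"))
    if price > MAX_PRICE_USD:
        return None                    # too rich for us; crawl somewhere else
    # Retry, agreeing to pay exactly the quoted price.
    return requests.get(url, headers={"crawler-exact-price": str(price)})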

Incidentally, folks, I’m anthropomorphising here. Crawlers are software. They don’t ask, they don’t want to pay, they don’t agree. Even “crawling” is a metaphor. We’re talking about configuration settings and binary logic in a piece of software that makes network requests. Sure, it makes a better story if you’ve got this picture in your head of some sort of creepy-crawly insect knocking on doors and haggling over prices, but it is just code. Don’t get too attached to the metaphor.

Anyway. That’s the model. Sounds simple, apart from two tiny little details… how do you know the thing making the requests is an AI crawler, and who handles the money?

The first part is being done using a proposal called Web Bot Auth, based on two drafts from the Internet Engineering Task Force, the IETF: one, known as the directory draft, is a mechanism for allowing crawlers and web bots to publish a cryptographic key that websites can use to authenticate those bots, and the second, the protocol draft, is a mechanism for using those keys to validate individual HTTP requests.

That way, anybody running a web crawler can create a cryptographic key pair and register that key pair with Cloudflare - “hey, we’re legit, our crawler will identify itself using THIS key, here’s the URL where you can validate the key, and by the way, here’s where you send the bill”.

And that’s the second part: Cloudflare is proposing to aggregate all of those requests, charge the crawler account, and distribute the revenue to the website owners whose content is being crawled. Cloudflare acts as what’s called the Merchant of Record for those transactions, which should make it all much more straightforward when it comes to things like taxation.

Let’s be realistic here. The technical bits of this are not that complicated. They’re built using existing web standards and protocols. The financial elements of the proposal are far more complex, but this is Cloudflare, a company that probably understands the relationship between internet traffic, billing and international revenue models better than anybody.

There’s one big question that isn’t addressed in their post: what stops AI bots just pretending to be regular humans using regular browsers, and bypassing the whole pay per crawl thing? I’m guessing that, this being Cloudflare, AI bot detection is one of the things they’re quite good at… but publishers also have the option now of putting everything behind some kind of paywall; humans have to sign in, and bots have to validate. There’s also no indication as to what sort of amounts they have in mind - beyond the fact their examples are in US dollars, as opposed to, say, micros, which are a standard currency unit in Google’s payment API that’s worth one millionth of a US dollar. But I guess capitalism will figure that out.

Folks, I have to be honest. I’ve been working on the web since it was invented, and this is the first thing I’ve seen in a long, long time that is genuinely exciting. Not necessarily at face value - I don’t care that much about Cloudflare making AI bots pay to crawl websites. No, what’s exciting is that if Cloudflare goes all-in on this, this could be a big step towards a standard model, and a set of protocols, for monetising automated access to online content - even if neither Cloudflare nor AI is involved.

Imagine a decentralised music streaming service, where artists host their own media and playback apps negotiate access to that media via a central broker that validates the app requests and distributes the revenue. Playback costs ten cents; if an AI wants to ingest, mash up and remix your music? Fifty bucks. Or a local news site that can actually make money out of covering local news… how much would you pay to know what’s actually going on with all those sirens and smoke down the end of the High Street, from an experienced reporter who’s actually on the scene asking questions, as opposed to somebody in an office recycling stuff they read on social media?

And the fact that the proposal is based around 402 Payment Required, something that’s been part of the web since the days before Google and Facebook, something that’s older than Netscape and Internet Explorer? That just makes me happy. It reminds me of the web back in the 1990s, when the protocols and proposals were all still new, and exciting, and it seemed like there was no limit to what we’d be able to build with them. And yeah, perhaps I’m being overly optimistic… but y’know, looking around at the state of the world, and the web, these days, maybe we could all use a little optimism.

"Could HTTP 402 be the Future of the Web?" was posted by Dylan Beattie on 04 July 2025 • permalink