Here are my five latest blog posts - or you can browse a complete archive of all my posts since 2008.

Could HTTP 402 be the Future of the Web?

When Tim Berners-Lee first created HTTP, way back in the early 1990s, he included a whole bunch of HTTP status codes that seemed like they might turn out to be useful. The one everybody knows is 404 Not Found - because, as a human being browsing around the web, you see that one a lot. Some of them are basically invisible to humans: every time you load a web page, there’s actually an HTTP 200 OK happening behind the scenes, but you don’t see that unless you go looking for it. Same with various kinds of redirects - if the server gives back an HTTP 301, it’s saying “hey, the page you wanted has moved permanently; here’s the new address”; a 307 Temporary Redirect is saying “that’s over there right now but it might be back here later so don’t update your bookmarks”; and a 304 is saying to your browser “hey, the version of the page that’s in your cache is still good; just use that one”.
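
To make that concrete, here’s a quick sketch - Python, using the third-party requests library - that surfaces the status codes your browser normally hides. The URL and the ETag value are just placeholders:

# A sketch of watching HTTP status codes from Python, using the
# third-party "requests" library. The URL and ETag are placeholders.
import requests

response = requests.get("https://example.com/old-page", allow_redirects=True)

# Any redirects (301, 302, 307...) that happened along the way:
for hop in response.history:
    print(hop.status_code, hop.headers.get("Location"))

# The final response - a 200 OK if everything worked:
print(response.status_code, response.reason)

# A conditional request: if the server supports ETags and our cached
# copy is still good, it answers 304 Not Modified with no body.
cached = requests.get("https://example.com/old-page",
                      headers={"If-None-Match": '"abc123"'})
print(cached.status_code)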

If, like me, you’ve spent a lot of time building web apps and APIs, you’ve probably spent hours of your life poring over the HTTP specifications looking for the best response code for a particular situation… like if somebody asks for something they’ve just deleted, should you return a 404 Not Found, or an HTTP 410 Gone - “hey, that USED to be here but it’s gone and it isn’t coming back”?
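
If you wanted to make that 404-versus-410 distinction in code, it might look something like this minimal sketch - I’m using Flask here, and the route, the articles dict, and the deleted_ids set are all invented for illustration:

# A minimal sketch of choosing between 404 and 410, using Flask.
# The route, the articles dict and the deleted_ids set are made up.
from flask import Flask, abort

app = Flask(__name__)

articles = {"42": "An article that still exists."}
deleted_ids = {"17"}  # things we know used to exist

@app.route("/articles/<article_id>")
def get_article(article_id):
    if article_id in articles:
        return articles[article_id]  # 200 OK
    if article_id in deleted_ids:
        abort(410)  # Gone: it used to be here, and it isn't coming back
    abort(404)      # Not Found: we've never heard of it

if __name__ == "__main__":
    app.run()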

And along the way, you’ve probably noticed this status code: HTTP 402 Payment Required. This code has been there since the dawn of the world wide web, but, to quote the Mozilla Developer Network: “the initial purpose of this code was for digital payment systems, however this status code is rarely used and no standard convention exists”.

Well, that might be about to change. Earlier this week, the folks over at Cloudflare announced they’re introducing something called “pay per crawl” - enabling content owners to charge AI crawlers for access.

Let’s put that in context for a second. For the vast majority of commercial websites out there, their ability to make money is directly linked to how many humans are browsing their site. Advertising, engagement, growth, subscribers - however you want to slice it, if you want to monetize the web, you probably want humans looking at your content. It’s not a great model - in fact, it’s mostly turned out to be pretty horrible in all kinds of ways - but, despite multiple valiant attempts to develop better ways to pay creators for content using some sort of microtransactions, advertising is the only one that’s really stood the test of time.

Then AI comes along. By which, in this context, I specifically mean language models trained on publicly available web data. I’ve published a transcript of this video on my blog. If somebody comes along in a couple of weeks and Googles “dylan beattie cloudflare pay per crawl”, Google could provide an AI-generated summary of this article. Or, like a lot of folks out there, that person might skip Google completely and just ask ChatGPT what Dylan Beattie thinks about pay per crawl - and they’ll get a nice, friendly - and maybe even accurate - summary of my post, and so that person never makes it as far as my website.

That is a tectonic shift in the way commercial web publishing works. For nearly thirty years, search engines have been primarily about driving traffic to websites; the entire field of SEO - search engine optimisation - is about how to engineer your own sites and your own content to make it more appealing to engines like Google and Bing. Now, we’re looking at a shift to a model where AI tools crawl your site, slurp up all your content, and endlessly regurgitate it for the edification of their users - and so the connection between the person who’s writing the thing, and the person who’s reading the thing, is completely lost.

For folks like me, who enjoy writing for humans, that’s just sad. For writers who rely on human web traffic to earn their living, it’s catastrophic… and that’s what makes Cloudflare’s proposal so interesting: they’re proposing a way to charge crawlers to read your content.

Now, websites have historically relied on something called a robots.txt file to control what search engines can and can’t see… but it’s advisory. The web was designed to be open. Robots.txt is like leaving all your doors and windows wide open and putting a sign on the lawn saying “NO BURGLARS ALLOWED”, and it’s just one of the many, many ways in which the architects of the web were… let’s say optimistically naïve about how humans actually behave. Which is maybe understandable, given that CERN, the nuclear research centre on the French/Swiss border where Tim Berners-Lee invented the World Wide Web, isn’t renowned for being a hotbed of unscrupulous capitalism.
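
If you’ve never looked at one, a robots.txt is just a plain text file of polite requests - something like this sketch, where the paths are placeholders (GPTBot is OpenAI’s crawler):

# An illustrative robots.txt. Nothing enforces any of this -
# a well-behaved crawler honours it; a badly-behaved one ignores it.
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/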

So you had a choice: you make your content wide open and ask the robots to play nice, or lock it away behind a paywall so that only paying subscribers can see it. Which, of course, means the robots can’t see it, so your site never shows up in Google, so we invented all kinds of clever ways to create content that was accessible to search engines but asked human visitors to register, or sign in, or create an account…

Now, in theory, Cloudflare’s proposal is pretty simple. If your website is hosted behind a Cloudflare proxy - and according to Backlinko, 43% of the world’s ten thousand busiest websites use Cloudflare, so that’s a LOT of websites - then when an AI crawler comes asking for content, you can reply with an HTTP 402 Payment Required and give them a price - and if the crawler wants to pay, they try again, and include a crawler-exact-price header indicating “yes, I will pay that much” - this is reactive negotiation. Alternatively, there’s proactive negotiation, where the crawler says “hey, I’ll give you five bucks for that web page” and Cloudflare says “Yeah, sure!” - or, if you’ve configured your site so that page costs ten bucks, the crawler gets an HTTP 402 Payment Required, and they’re free to make a better offer or go crawl somewhere else.
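
From the crawler’s side, the reactive flow might look roughly like this sketch. It’s not Cloudflare’s actual client: the crawler-exact-price header is the one named in their announcement, but the crawler-price header I’m reading the quote from, and the URL, are assumptions on my part:

# A sketch of the reactive negotiation flow from the crawler's side.
# "crawler-exact-price" is named in Cloudflare's announcement; the
# "crawler-price" quote header and the URL are assumptions here.
import requests

MAX_PRICE_USD = 0.05  # the most this crawler is configured to pay

response = requests.get("https://example.com/some-article")

if response.status_code == 402:
    quoted = float(response.headers.get("crawler-price", "inf"))
    if quoted <= MAX_PRICE_USD:
        # Try again, committing to pay exactly the quoted price:
        response = requests.get(
            "https://example.com/some-article",
            headers={"crawler-exact-price": str(quoted)},
        )
    else:
        print("Too rich for us - crawl somewhere else.")

print(response.status_code)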

Incidentally, folks, I’m anthropomorphising here. Crawlers are software. They don’t ask, they don’t want to pay, they don’t agree. Even “crawling” is a metaphor. We’re talking about configuration settings and binary logic in a piece of software that makes network requests. Sure, it makes a better story if you’ve got this picture in your head of some sort of creepy-crawly insect knocking on doors and haggling over prices, but it is just code. Don’t get too attached to the metaphor.

Anyway. That’s the model. Sounds simple, apart from two tiny little details… how do you know the thing making the requests is an AI crawler, and who handles the money?

The first part is handled by a proposal called Web Bot Auth, based on two drafts from the Internet Engineering Task Force, the IETF: one, known as the directory draft, is a mechanism that allows crawlers and web bots to publish a cryptographic key that websites can use to authenticate those bots; the second, the protocol draft, is a mechanism for using those keys to validate individual HTTP requests.

That way, anybody running a web crawler can create a cryptographic key pair and register that key pair with Cloudflare - “hey, we’re legit, our crawler will identify itself using THIS key, here’s the URL where you can validate the key, and by the way, here’s where you send the bill”.
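
Conceptually, the mechanics are the same as any public-key signature scheme - something like this sketch using Python’s cryptography library. To be clear, this is not the drafts’ actual wire format (they build on HTTP message signatures); it just shows the sign-and-verify idea:

# A conceptual sketch of the Web Bot Auth idea: the crawler holds an
# Ed25519 key pair, publishes the public half, and signs each request
# so the website can check it. NOT the drafts' exact wire format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The crawler generates a key pair and registers the public key:
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The crawler signs something identifying the request:
message = b"GET https://example.com/some-article"
signature = private_key.sign(message)

# The website (or Cloudflare) verifies it against the published key:
try:
    public_key.verify(signature, message)
    print("Signature valid: this really is the registered crawler.")
except InvalidSignature:
    print("Signature invalid: reject or challenge the request.")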

And that’s the second part: Cloudflare is proposing to aggregate all of those requests, charge the crawler account, and distribute the revenue to the website owners whose content is being crawled. Cloudflare acts as what’s called the Merchant of Record for those transactions, which should make it all much more straightforward when it comes to things like taxation.

Let’s be realistic here. The technical bits of this are not that complicated. They’re built using existing web standards and protocols. The financial elements of the proposal are far more complex, but this is Cloudflare, a company that probably understands the relationship between internet traffic, billing and international revenue models better than anybody.

There’s one big question that isn’t addressed in their post: what stops AI bots just pretending to be regular humans using regular browsers, and bypassing the whole pay per crawl thing? I’m guessing that, this being Cloudflare, AI bot detection is one of the things they’re quite good at… but publishers also now have the option of putting everything behind some kind of paywall; humans have to sign in, and bots have to validate. There’s also no indication as to what sort of amounts they have in mind - beyond the fact that their examples are in US dollars, as opposed to, say, micros, the standard currency unit in Google’s payment APIs, worth one millionth of a US dollar. But I guess capitalism will figure that out.

Folks, I have to be honest. I’ve been working on the web since it was invented, and this is the first thing I’ve seen in a long, long time that is genuinely exciting. Not necessarily at face value - I don’t care that much about Cloudflare making AI bots pay to crawl websites. No, what’s exciting is that if Cloudflare goes all-in on this, this could be a big step towards a standard model, and a set of protocols, for monetising automated access to online content - even if neither Cloudflare nor AI is involved.

Imagine a decentralised music streaming service, where the artists host their own media and playback apps negotiate access to that media via a central broker that validates the app requests and distributes the revenue. Playback costs ten cents; if an AI wants to ingest, mash up and remix your music? Fifty bucks. Or a local news site that can actually make money out of covering local news… how much would you pay to know what’s actually going on with all those sirens and smoke down the end of the High Street, from an experienced reporter who is actually on the scene asking questions, as opposed to somebody in an office recycling stuff they read on social media?

And the fact that the proposal is based around 402 Payment Required, something that’s been part of the web since the days before Google and Facebook, something that’s older than Netscape and Internet Explorer? That just makes me happy. It reminds me of the web back in the 1990s, when the protocols and proposals were all still new, and exciting, and it seemed like there was no limit to what we’d be able to build with them. And yeah, perhaps I’m being overly optimistic… but y’know, looking around at the state of the world, and the web, these days, maybe we could all use a little optimism.

"Could HTTP 402 be the Future of the Web?" was posted by Dylan Beattie on 04 July 2025 • permalink

The Linebreakers Video Macro

Anybody who’s seen the Linebreakers live knows that backing videos are a vital part of the show - partly ‘cos that’s where all the drums come from, ‘cos it’s way easier taking a Macbook on a plane than travelling with a drum kit, and partly ‘cos having all the lyrics up on screen makes it much easier for the audience to keep up with the jokes, and there are a lot of jokes.

Those backing videos are all played from PowerPoint. It works, it’s reliable, and having the entire show inside a single 5GB PPTX file means it’s trivial to run it from a spare laptop if something goes wrong. Every slide contains a single video clip, most of which start and end with a fade from/to black - so at the end of each song I’m staring at a laptop screen showing me that the last song is finished, and the next song is… black screen. So I rely on the speaker notes area to show me which song is on each slide, and updating this by hand is incredibly tedious and error-prone… so I wrote a macro for it. It’ll scan every slide in the deck, look for the embedded video clips, and replace the slide notes with the clip names (which are the filenames) for the current and the next slide.

In Visual Basic for Applications, no less.

Sub UpdateSlideNotes()
  Dim sld As Slide, shp As Shape, lastSlide As Slide
  Dim notes As TextRange, thisName As String
  For Each sld In ActivePresentation.Slides
    thisName = "(no video)"
    ' Find the embedded video clip on this slide, if there is one
    For Each shp In sld.Shapes
      If shp.Type = msoMedia Then
        ' Nested If: VBA's And doesn't short-circuit, and MediaType
        ' errors on shapes that aren't media
        If shp.MediaType = ppMediaTypeMovie Then thisName = shp.Name
      End If
    Next shp
    ' Shapes(2) on the notes page is the notes text placeholder
    sld.NotesPage.Shapes(2).TextFrame.TextRange.Text = "THIS: " & thisName
    ' Append this slide's clip name to the previous slide's notes as NEXT
    If Not lastSlide Is Nothing Then
      Set notes = lastSlide.NotesPage.Shapes(2).TextFrame.TextRange
      notes.Text = notes.Text & vbNewLine & vbNewLine & "NEXT: " & thisName
    End If
    Set lastSlide = sld
  Next sld
End Sub
"The Linebreakers Video Macro" was posted by Dylan Beattie on 25 June 2025 • permalink

What The Fork Is Going On?

I got two emails recently that have weighed heavily on my mind.

One was from a restaurant booking service called The Fork. On Saturday afternoon, I had used their service to try to book a table for dinner at Garlic & Shots, a legendary rock’n’roll bar in Stockholm. I tried calling them first: no answer, but hey, they’re probably just not open yet, and there’s a link on their website to reserve a table. 10 people, 7pm, email, phone number, confirm… “your reservation is pending”.

I hate “pending”. You can’t do anything with pending. “Hey, is there a dinner plan?” “Yeah. Or maybe no. A reservation is pending.” I suspect even Schrödinger’s cat would baulk at dinner being in some sort of unresolved quantum state.

OK, the place doesn’t open until 5pm, maybe they’ll confirm it then. 5pm comes and goes, no confirmation. I give them a call. A recorded voice says “Welcome!” - in Swedish - and then disconnects the call. This happens three times. There is no option in The Fork app to talk to a human, make a phone call, or anything of that nature. Finally I figure it’s not that far away, I get on the metro, head over, and talk to an actual human. They’ve been very busy: Iron Maiden has just played two arena shows in Stockholm, the city has been wall-to-wall metal fans for three days, and they’ve basically drunk all the beer in every rock bar in town - but no problem; they can take us for dinner. Might be 14 people, might be 8pm rather than 7pm… no problem at all. We have a delightful evening there. So. Much. Garlic.

Sunday 15th June, around 11am, I’m on the train from Stockholm to Helsingborg, and I get an email from The Fork confirming my restaurant booking: 10 people, 7pm, on Saturday 14th.

I do not know what kind of mind is required to develop, test, launch, and support a service that will confirm a restaurant reservation the day after the meal has taken place. I also strongly suspect that the proprietors of Garlic & Shots couldn’t honestly give half a microbollocks what The Fork is doing, but at some point some smooth-talking sales person probably said “no, it’s fine, we’ll handle all your reservations for you” and they just went “ok, whatever, now buy a drink or get out of my bar” and now it’s just a thing.

The next email was from His Majesty’s Revenue and Customs, informing me that there was an important message in my online tax account. They cannot, of course, tell me what the message says, or what it’s about, or even include a link directly to it. “Security reasons”.

An “important message” could actually be important. Not important like “there has been an update to the Netflix Terms & Conditions” important. Important like “you owe us money; pay up or go to jail” important.

So I open a browser, go to HMRC’s website, sign in with my Unique Taxpayer Reference, go through the multifactor authentication, go into my tax account, and sure enough, there’s an important message. I have a new tax statement. OK, cool. What does it say?

Oh, no, we can’t tell you that:

Your new Self Assessment statement has been prepared. You’ll be able to view it online within 4 working days.

So there you go. Two emails. One confirming a restaurant reservation that already happened two days earlier, the other telling me that I need to sign in to a website to see another message telling me that the thing they actually want to tell me isn’t ready yet but I should keep checking back because it’ll be there within four working days.

All you folks out there who think that coding is the hard part, and AI is going to revolutionise software development because it’ll replace the slow, unreliable human programmers with artificial intelligence that cranks out millions of lines of bug-free code in seconds? No. These emails didn’t get sent at the wrong time because of bugs in the code, or because scheduling email delivery is a hard problem and the developers couldn’t quite get it working in time. No, sending email to the right person, at the right time, is a solved problem. It’s been solved for decades. But choosing what that right time is? Identifying the situation where maybe asynchronous communication and pending reservations isn’t actually the best way to solve a problem? No, those require actual human intelligence, which is rapidly closing in on astatine’s long-held status as the rarest naturally occurring element on planet Earth.

I got two stupid pointless emails because somewhere behind them there are stupid pointless humans making stupid pointless decisions. Not programmers. In fact, most of the programmers I know would take one look at the ticket asking them to send an email saying “hey, your new tax statement is ready but you can’t read it yet” and go straight back to the product owner saying “um… how about we wait until the statement is actually available and then send the email?”

On the other hand, any startups out there using AI to replace their product owners and keeping all their human developers? Give me a call, I’ll come work for you. It would be lovely to be able to provide a bit of additional context and get a feature request instantly updated to something more sensible without causing a major political incident in the process.

Oh, and if you don’t know how to make sure email gets sent to the right person at the right time, I do that too. I made an entire course about it: Sending Email with .NET: From Zero to Hero, it’s on Dometrain, and they’re having a summer sale right now so it’s 30% off.

"What The Fork Is Going On?" was posted by Dylan Beattie on 16 June 2025 • permalink

On The Fungibility of Trains

When’s a train not a train?

I’m on my way from Antwerp to Budapest, via Amsterdam Schiphol airport, on the delightfully fast and comfortable service that’s now called Eurostar but is still quite clearly a Thalys train with “Eurostar” painted on it.

Except I’m on the wrong train. I have a ticket - and a seat reservation - for the 13:30 departure to Schiphol, so when a Eurostar train headed for Schiphol pulled into platform 22 at Antwerp at 13:27, I went to board it. Except… no. This isn’t that train. This is train 9327. My ticket is for train 9333. There’s a Eurostar to Schiphol every hour… and this one is now running exactly one hour late.

So I smiled very politely and asked if, since I had a ticket for the train that was leaving at 13:30 and I had a plane to catch, could I possibly use that ticket to board the train that was actually leaving at 13:30, and the train conductor agreed that this would probably be alright (and found me an empty seat - go Eurostar!)

On the vast majority of trains in this part of the world, you don’t reserve a seat: you just buy a ticket to your destination, and it’s valid on every train headed that way. Whereas for my flight to Budapest, the ticket is very clearly for a specific seat, on a specific flight, and even if by some bizarre coincidence there are delays and a different KLM flight leaves Schiphol for Budapest at the exact time mine was supposed to leave, it wouldn’t occur to me to get on the wrong plane because it’s in the right place at the right time. Not without changing my ticket, anyway.

Which means Eurostar is in an interesting grey area… because although they run trains on the same rail network as EuroCity and Intercity, your Eurostar ticket is good for one seat on one train, identified by a four-digit train number… and, as I’ve learned today, the train that leaves Antwerp at 13:30 for Schiphol Airport may not, in fact, be the 13:30 train to Schiphol.

Next stop Budapest and CraftConf, which is going to be awesome because not only have they put together a fantastic line-up of speakers and sessions, but the conference is in a railway museum. (See? It’s all about trains today!) And then home. For a week-and-a-bit before heading to Stockholm for DevSum. I’d put hyperlinks in for all of these, but I’m on my phone and you know how to Google.

"On The Fungibility of Trains" was posted by Dylan Beattie on 28 May 2025 • permalink

Sending Email with .NET: From Zero to Hero

Last month I published my first video course on Dometrain, “Sending Email with .NET: From Zero to Hero”. I just got this review from somebody who completed the course:

“⭐⭐⭐⭐⭐ Fantastic job! When I saw the course was over 9 hours, I wondered what you would talk about for that long. Now my head is swimming and it feels like 18 hours got packed into a 9 hour course. I didn’t realize how much I didn’t know about email. Thanks for putting this together.”

I didn’t set out to make a 9-hour course about sending email with .NET - I figured the course would run to 3, maybe 4 hours. But once I’d covered all the underlying standards like SMTP and MIME, modern .NET libraries like MailKit and MimeKit, tools like Mailpit, Mailtrap and ngrok, the chaos of HTML email standards and tools like MJML that exist to tame it, how to set up all the various DNS records - SPF, DKIM, DMARC - plus troubleshooting, logging, and sending mail from background services… there’s a lot to talk about.
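
For anyone who hasn’t set these up before, those authentication records are just DNS TXT entries - something like this sketch, where the domain names are placeholders and the DKIM public key is truncated:

; Illustrative DNS TXT records for email authentication. The names
; and values are placeholders; the DKIM key is truncated.
example.com.                       TXT "v=spf1 include:_spf.example.net ~all"
selector1._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."
_dmarc.example.com.                TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"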

The course is “Sending Email with .NET: From Zero to Hero”, you can buy it at dometrain.com, and it’s 40% off at the moment. Check it out. I think it’s awesome.

"Sending Email with .NET: From Zero to Hero" was posted by Dylan Beattie on 14 May 2025 • permalink