Here are my five latest blog posts - or you can browse a complete archive of all my posts
since 2008.
Don't Reinvent The Wheel: Use What Works
Posted by Dylan Beattie on 09 April 2026
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
When Jeff Goldblum’s rock star mathematician - sorry, chaotician - spoke those immortal lines in Jurassic Park, none of us had any idea how the craft of software development was going to unfold over the next few decades. It was 1994. The world wide web was a handful of academic websites scattered across university servers, Jeff Bezos was on Usenet looking for C++ developers who knew HTML, corporate software was COBOL terminals or Visual Basic on Windows 3.1. There was plenty of free software around, if you knew what to do with a tarball and a makefile, and a few intrepid early adopters were running GNU operating systems built around Linus Torvalds’ Linux kernel, but the term “open source” didn’t exist yet. There was no Java, no .NET, no Python, no JavaScript, no cloud, no AI.
Three decades later, it turns out Dr Malcolm wasn’t just talking about cloning dinosaurs – not unless the dinosaurs in question are authentication frameworks, object mappers, message queues, and cloning them is building your own version ‘cos corporate won’t pay for a license – but look back at your own career, your own projects, and ask yourself: how many times have you built something because you could, without stopping to think whether you should?
I’ve been working as a professional developer for almost as long as Jurassic Park’s been around. I learned HTML in 1992, I started building data-driven web apps in 1996 - classic ASP, VBScript, ADO DLLs and Microsoft Access databases - and I’ve been solving problems with code ever since.
At least, I honestly believed I was solving problems… it turns out that sometimes, I was creating more problems than I was solving. Not immediately, of course; me & my teams were cranking out useful features, keeping customers happy, and generating revenue. We liked it. Customers liked it. The Business liked it. One of the things The Business really liked was when we’d evaluate some expensive library, or package, or software-as-a-service platform, and go “no, that’s way too expensive. We can build our own” - and we did. I’ve probably built half-a-dozen homebrewed customer databases for companies that didn’t want to pay for CRM. I’ve built email clients, marketing tools, content management systems, object-relational mappers, authentication, authorization… one time I even figured out how to use a SQL Server database as a message queue, ‘cos hey, we were already paying for SQL, right? Might as well use it! (The secret is WITH READPAST, if you’re curious.)
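For the curious: the READPAST hint tells SQL Server to skip rows that another transaction has locked rather than block behind them, which is what stops two competing workers from dequeuing the same message. A minimal T-SQL sketch of the pattern - the table and column names here are illustrative, not from the actual system:

```sql
-- Hypothetical queue table; names are illustrative.
CREATE TABLE dbo.MessageQueue (
    Id         INT IDENTITY(1,1) PRIMARY KEY,
    Payload    NVARCHAR(MAX) NOT NULL,
    EnqueuedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Each worker "pops" the oldest unlocked message. ROWLOCK keeps lock
-- granularity at the row level; READPAST makes the query skip rows other
-- workers have locked instead of waiting on them. DELETE ... OUTPUT claims
-- and returns the row in a single atomic statement.
WITH next_message AS (
    SELECT TOP (1) *
    FROM dbo.MessageQueue WITH (ROWLOCK, READPAST)
    ORDER BY Id
)
DELETE FROM next_message
OUTPUT deleted.Id, deleted.Payload;
```

Run inside a transaction, the claimed row stays locked - and invisible to other READPAST readers - until the worker commits, so if a worker crashes and rolls back, its message simply reappears on the queue.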
Folks, when I talk about “me & my team” here, this is definitely one of those “share the credit, accept the blame” situations. On just about every team I’ve worked with, I’ve been the one making the decisions. I’ve been lucky enough to work with a lot of really smart, capable developers, who built the thing right, and on the occasions it turned out we hadn’t built the right thing? That’s usually been on me.
Now, I’m not going to lie: I learned a lot building all those things. It gave me a deep appreciation for the complexities involved - and for the difference between building something that “works on my machine” and something that’ll work in production across a dozen nodes in a server farm, 24/7, for years at a time. But it turns out that homebrewing everything really isn’t a great idea when it comes to what they call TCO - total cost of ownership.
In most cases, the initial build took a couple of weeks: we’d crank something out, get it into production, and move on to the next thing. But then, somewhere down the line, the thing we built would break. Or it’d start to creak under the volume of traffic, concurrency issues, deadlocks. Or we’d just need to add new features - not cool, innovative features we could charge more money for. Boring features. New file formats. Unicode support for localization - stuff that “the business” obviously thought should have been there from the beginning, even though nobody had ever asked for it.
And, because all the software was ours, the only people in the world who knew how to fix bugs and add features were… us. We couldn’t just install the latest version, or upgrade, or outsource. Sure, we hired more developers - but it’d take them months to get up to speed on our codebase’s quirks and idiosyncrasies.
Imagine if we’d built those apps around established open source components and libraries. Senior developers and contractors could have hit the ground running, and been pushing new features to production way sooner. Baseline features like file formats and Unicode support wouldn’t be down to our team - or if they were, the developer who implemented them could submit those changes to the upstream project, which benefits the entire community and looks pretty good on their CV when it’s time to move on to the next thing. And junior developers wouldn’t be spending their time learning homebrewed abstraction patterns and in-house workflows; they’d be learning skills, patterns and practices that will stand them in good stead throughout their careers.
Over many, many years, I came to realise something: every release, every line of code, every day that my team and I spent working on stuff, should be building things we can sell. Call them what you want: strategic differentiators, special sauce… the stuff our customers can’t get anywhere else. Customers aren’t dealing with us ‘cos we’ve got a fantastic login system, or a really cool message bus. They’re dealing with us ‘cos we’re the best place to go to find acting jobs, or conference venues, or machine tools. And if you want your dev teams focused on the special sauce, everything else has to be as ordinary, as predictable, as boring as possible. You need usernames, passwords, identity management? That’s a solved problem. You need to synchronise data across multiple regions and time zones? That’s a solved problem. You want resilient messaging? That’s a solved problem.
For me, that was the first milestone on the road to engineering maturity: the realisation that, no matter how fun and interesting it’ll be to build my own, it’s probably the wrong answer; that my time and energy will be better invested in learning how to integrate and deploy an established solution that solves the problem. It helped that a lot of the time, that solution was free. Free as in free beer, free as in free speech - grab the code, install the package, configure it to do what we need, and get back to building stuff.
And so, over time, we switched from building our own data access layers to using NHibernate, Linq-to-SQL (hey, it was a long time ago!), and Entity Framework. We replaced our SQL-based message queues with EasyNetQ and NServiceBus. We moved from hosting on-premise, to a private cloud, to Amazon Web Services. It wasn’t always plain sailing, but over time we ended up spending a lot less time reinventing common infrastructure, and a lot more adding features our customers wanted.
The second milestone for me was the realisation that, sometimes, the best engineering solution isn’t going to be free. You know that feeling? When you click the “Pricing” tab and you think “oh, boy, that’s a lot… I’m going to have to get approval for this”. Most of us didn’t become software developers because we wanted to sit in budget meetings; we became software developers because we wanted to, y’know, develop software. But, with the benefit of a great deal of hindsight, on almost every occasion that we hacked something together rather than spending money? Yeah. We should have done the analysis, created the business case, and spent the money.
There’s a tendency to think of developer time as free - after all, your dev team is going to get paid anyway; it’s not going to cost you any extra to have them build an in-house login system. That’s completely the wrong way to look at it. It’s not about build vs buy. It’s about what they could be building instead - and how much you’ll be able to sell it for. You can spend two months building a login system - or you can buy a login system that works and get the dev team working on the features for the new Platinum membership tier, ship that two months earlier, and look at that - Platinum membership revenue just paid for the new login system and then some. You might even get a bonus.
When I start working on an unfamiliar codebase now, the first thing I do is look at the dependencies. Which packages and libraries does it use? Where’s it hosted? How does the data access work?
If all I see are two dozen .NET projects with namespaces like MyCompany.Data.TableMapper? That’s a bad day. It’s going to be uphill all the way.
If I see a list of familiar services, names like AutoMapper, IdentityServer, NServiceBus, MassTransit? That’s a good day. I know those projects. I’ve used them, I trust them, I know where to find the docs.
If it turns out there are paid support contracts & maintenance agreements for those dependencies? That’s a great day. It means somebody else knows what’s going on, they’re getting paid to care about my problems, and they’re ready to help if I need it - and I get to focus on the special sauce.

That’s all a very roundabout way of saying that a bunch of us have got together and created a thing - the Use What Works initiative. It’s a collaborative project intended to encourage, and support, more constructive conversations around sustainability in open source software.
The first iteration is live at https://usewhatworks.org/ now. Swing by, take a look, let us know what you think. If you like what we’re doing and want to put your name on it, there’s a link to sign the manifesto. If you’ve got a scenario or a question we haven’t thought of, give us a shout. We’d love to hear from you.
Plus ça Change... Plus c’est la Même Chose
Posted by Dylan Beattie on 08 April 2026
It’s 2007. I’m working on a big rewrite (yes, I know) of a big system; a database-driven web app built in C#. A significant part of the project is just the code to get data in and out of the database. Object-relational mappers are still very much in their infancy; somebody’s ported Java’s Hibernate project to .NET but it’s a little rough around the edges and involves quite a lot of XML. But not to worry - I’ve found this amazing tool called CodeSmith. CodeSmith can generate C# code based on your database schema (it still exists - it’s called CodeSmith Generator now). I start using CodeSmith to build a template-driven data access layer. Build the perfect DB schema, generate C# classes for every table with built-in persistence logic… this is gonna save so much time! Don’t worry about the frontend, validation, business logic - once the data layer is in place, all that stuff is gonna be a walk in the park. Let’s get the data layer sorted first. Days pass. Weeks. A month. Two months. I learn about cursors and table locks. I learn about something called a topological sort, to ensure that complex data operations involving foreign key constraints can be applied in the correct order.
I have a couple of prototypes and proofs-of-concept, but nothing that will form the basis of a working product. Nothing to show potential customers or stakeholders… but that’s OK. When this thing works, it’s going to be amazing; just iron out the last few glitches and then everything else will magically fall into place…
…what’s that? I should put the magic tools away, take what I’ve got, and just do the hard work to turn it into something we can actually launch? Don’t be ridiculous. It’s nearly done!
(It wasn’t nearly done. It was never done. The project was cancelled after six months, without ever shipping a single line of code.)
It’s 2026. I’m using Claude Code. I’ve got a prompt file - ANALYST.md - that makes Claude ask me questions about what I’m building. The analyst prompt generates REQUIREMENTS.md. The requirements look good. They look very good; at a cursory glance, they’re better than any set of requirements I’ve ever seen from an actual client. I feed REQUIREMENTS.md to the ARCHITECT.md persona. There are more questions. It creates a SPEC.md and a PLAN.md. I fire up a pair of agents, DEV.md and QA.md. Dev writes the code. QA reviews the code. I run the result. It’s not quite right. I missed something in the requirements. Something obvious, but I didn’t mention it and ANALYST.md didn’t ask. I fire up the analyst again. We have another chat. The requirements get updated. I fire up the architect. It reviews the new requirements. The plan and the spec are updated. The agents go another few rounds. I review the result. It’s still not right. Sure, it says it’s monitoring the filesystem for changes, but it isn’t. Back to the requirements. Did we miss something? Is this a problem for the analyst? Is it architectural? Did the dev agent miss something? Did the QA agent miss something? I’m not sure. Back to the beginning. Round and round and round we go…
I’ve been doing this for a week now.
I have a couple of prototypes and proofs-of-concept, but nothing that will form the basis of a working product. Nothing to show potential customers or stakeholders… but that’s OK. When this thing works, it’s going to be amazing; just iron out the last few glitches and then everything else will magically fall into place…
…what’s that? I should put the magic tools away, take what I’ve got, and just do the hard work to turn it into something we can actually launch?
Don’t be ridiculous.
It’s nearly done!
How to Find the Stories
Posted by Dylan Beattie on 31 March 2026
One of the folks who joined my presenter workshop last week (which was awesome, by the way!) emailed me this morning with a follow-up question:
I first saw you speak at DDD South-West in Bristol (it was the “There’s No Such Thing as Plain Text” talk), and what stuck with me was your use of stories. They were interesting, quirky, and naturally interwoven with your talk. I try to bring elements of this approach into my own talks by focusing on “what will make people feel something?”, before getting into technical detail.
I work at […] a FinTech that largely serves the Business Travel industry (virtual cards for managing corporate spend). I’m currently searching for stories that (at least loosely) connect to our industry. The aim is to give a talk at one of the business travel conferences over the next year. It will inevitably touch on how AI is transforming our industry, and I’m comfortable talking about how [we are] using AI within our product set, but I’m aware of the importance of framing all of this with a story.
I’ve trawled through many industry “News Outlets” (who are largely just selling adverts with a sprinkling of text) but haven’t yet found the kind of inspiration I’m looking for. I would love to be able to call the talk “The Suitcase that Abandoned its Owner”, or “The $10,000 Uber Trip” - something a bit whacky that intrigues an audience.
First of all: excellent question. Technology as a positive force in the world is most effective when it’s solving actual problems that real people have… and people love to talk about their problems. Especially if they’re interesting problems that happened in strange places. You want engaging stories about weird things that happened to international business travellers? Talk to some people who travel internationally for business. You’re guaranteed to get a couple of good stories – and along the way, you’ll probably learn a lot of really valuable things about exactly what your customers are trying to do, and how your product can help them.
That’s true of just about every industry, by the way: the closer your developers are to your customers, the more likely they are to make the right call when facing any one of the hundreds of decisions that inform the way their software gets built.
So let’s kick things off with two fun stories of my own that I think fit the brief, and who knows, maybe a few of you can share some travel stories of your own in the comments.
The first one happened in Riga, Latvia, back in May 2018. I’d been at DevDays, and was on my way from the conference venue to the ferry terminal ‘cos I was catching the overnight ferry to Stockholm for DevSum. Yandex Taxi had just rolled out in Latvia - kinda like Uber, but built in Russia. Yandex is like Russia’s answer to big tech… it started out as a search engine, added food delivery, ride sharing, email hosting, basically copying all the things coming out of San Francisco but built for the Russian market.
So I get a Yandex Taxi. It works exactly like Uber, except a minute into the ride my phone pings. I’ve just paid Yandex Taxi thirteen cents. A minute later it pings again - 52 cents. 62 cents. 9 cents. 10 cents. 34 cents. Ping, ping, ping, ping, ping… all the way to the terminal.
Weird, huh? But not actually a problem… just weird.
Then a few months later, I was in Moscow for DotNext (this being back in the days when going to Russia to talk about software development was a completely fine and normal thing to do) and I got talking to one of the developers who built the payment system integrations for Yandex Taxi! So I naturally asked “hey, what’s with the weird payment thing?”
Turns out Yandex had a huge problem with people ordering a cab using a virtual payment card and then cancelling the card en route, so the driver would drop the passenger at their destination, the passenger would run away, payment gets declined, driver has no recourse. So the solution? Charge the card every minute - so if a payment gets declined, the driver can kick the passenger out right there. Not a bad solution - but combine it with Monzo, the bank and app I use when I’m travelling, which has realtime notifications every time I make a payment, and you’re getting ping-ping-ping all the way to your destination, for what are often comedically small amounts of money. I guess it all adds up.
The second story happened in 2024, en route from Hungary to Lithuania. I’d been in Budapest speaking at Liferay DEVCON, and had a very tight connection via Munich on my way to Vilnius for BuildStuff… first flight was delayed. A 90 minute connection became 60, then 45, then 30… by the time we landed, I had six minutes to make the connection… but, whether by accident or design, we parked right next to the gate for my connecting flight, no passport control, and I made it. My luggage did not.
No big deal. I’ve got an Apple Airtag tracker in it. When we landed in Vilnius three hours later, I could see my luggage was still at Munich airport, so I filled out all the forms and whatnot, told them where I was staying; no problem. Next day after breakfast, I checked Apple’s “Find My” app… and there was my luggage, somewhere in Bavaria, in the middle of the forest, miles away from anywhere.
Apple’s “find my device” network basically turns every iPhone on the planet into a node in a huge geolocation network, so I’m guessing what happened is an iPhone on board the plane could see my luggage’s tag, and it was connected to the inflight wifi (or a cell tower) just long enough to register a location as the plane was flying over that particular spot. (Yes, I know the Steigerwald Nature Park is not actually on the way from Munich to Vilnius. No, I don’t know either.)
But according to the app, my suitcase spent a nice relaxing morning chilling out next to a little lake in a German national park, and then teleported itself to Vilnius airport, and was delivered to my hotel a few hours later.
What are your weirdest tech travel stories, dear readers? Share them in the comments (yeah, I have comments now!) and who knows, they might end up in a conference presentation.
Look, Sir! Comments!
Posted by Dylan Beattie on 31 March 2026
So I got an email earlier, which you’ll find out about in my next post, which made me think “hey, the reply to this would make a great blog post”, and then I thought “…and it would be even better if people could add their own comments to it”, and so I plugged in the rather excellent Giscus, so now you can leave comments on my blog posts.
Go on, try it. It’s all running straight off GitHub Discussions, so there’s no database; it’s all client-side code, so there’s no server in the loop (well, there is, but it’s not mine so I don’t have to worry about it.)
From the Twitter Archives: npm install skynet
Posted by Dylan Beattie on 30 March 2026
I originally posted this as a Twitter thread in March 2018. It went viral, probably because Charles Stross quote-tweeted it with the comment “This thread. You read!” - yes, *that* Charles Stross. 😮 It was at https://twitter.com/i/web/status/976852582084808704, and I bookmarked it ‘cos CHARLES STROSS TOLD PEOPLE TO READ MY STORY, but he has since deleted his Twitter account and so it’s not there any more. I don’t know if Mr Stross is personally responsible for the page that now lives at https://x.com/cstross, but I heartily endorse the sentiment expressed thereon: “Fuck You, Elon Musk”.
When npm was first released in 2010, the release cycle for a typical nodeJS package was 4 months, and npm install took 15-30 seconds on an average project. By early 2018, the average release cycle for a JS package was 11 days, and the average npm install step took 3-4 minutes.
Extrapolating from historical data, scientists predicted that on 8th November 2019, the release cycle for most JS dependency packages would become shorter than the npm install time for a typical ‘hello world’ app or small blog engine.
Futurists were already talking about the ‘nodularity’ - a cultural event horizon beyond which it was impossible to make any rational predictions. With projects already out of date before they’d even finished building, software development as we knew it ceased to exist.
Most projects perished. A few hardy survivors worked out how to harness the power of the infinite restore loop and run logic within the installers themselves. Packages became self-replicating, self-modifying payloads of behaviour and intelligence.
Every developer who typed ‘npm install’ unwittingly slaved their workstation to the npm hivemind. Entire availability zones were consumed by node_modules and its relentless lust for power. Websites, APIs, databases; nothing was safe. Entire platforms were DDOSed to oblivion.
Finally, a few brave engineers penetrated the npm root servers. Disguising their payload as a routine documentation update, they bypassed key signing procedures and managed to inject a self-destruct routine into the ‘prepare’ scripts for left-pad…
It was far from perfect, but it was enough. Sysadmins everywhere seized the opportunity to install firewalls and block npm traffic, in a massive, global, coordinated effort - managed entirely via SMS messages and telex machines. Within 24 hours, the cycle was finally broken.
And as developers stumbled, bemused and blinking into the light of a new day, they were astonished to find some sites were still up. Perl, ASP, cgi-bin - relics from the very dawn of the web, still standing proud, monuments to a bygone age.
npm was isolated. The last running instance was hot-patched into a Docker container image and migrated onto a Raspberry Pi locked in a steel vault beneath the Arctic permafrost, its only connection to the outside world an air-gapped analog video feed of its terminal output.
As the software industry gathered and regrouped - older, wiser, warier, and absolutely definitely convinced that strong typing was a good idea after all - npm blinked away quietly to itself, alone in the silent, steel darkness.
Time passed. Months, years, decades. The dark days of npm and nodejs were all but forgotten… until one fateful morning, a security researcher, digging through the archives, fired up the video feed from the npm vault, just to see if anything was still there…