Guest post from Mythic Beasts: how we dealt with those DDoS attacks

Do you remember the distributed denial of service (DDoS) attacks this website was undergoing a few months ago? They made the news (partly because it was just so bizarre to see someone attacking an educational computing charity) – if you want to refresh your memory, see this, this, or this.

Pete Stevens, who runs marathons and our hosting company, Mythic Beasts, thought you’d be interested in what he’s been doing to try to ensure this can’t happen again. (Famous last words, Pete.) Here’s what he did. Over to Pete!

In the past we’ve had occasional trouble with denial of service attacks against the Raspberry Pi website. In particular, simply overwhelming us with traffic has proved not that difficult – the server only has a 1Gbps uplink. And when your admin (me) cocks up and doesn’t configure IRQ balancing properly, it turns out an attacker can saturate a single core calculating syncookies while the other cores sit idle.
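For the curious, fixing that looks roughly like this on Linux – a sketch only, with an illustrative interface name and IRQ numbers:

$ grep eth0 /proc/interrupts                    # see which CPU is fielding the NIC's interrupts
$ echo 1 | sudo tee /proc/irq/24/smp_affinity   # pin IRQ 24 to CPU0 (hex bitmask 1)
$ echo 2 | sudo tee /proc/irq/25/smp_affinity   # pin IRQ 25 to CPU1 (hex bitmask 2)
$ sudo apt-get install irqbalance               # or let the irqbalance daemon spread them for you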

Pete, pounding the pavement. Click the image for what is possibly the funniest local news story to come out of Cambridge in the last decade.

We briefly investigated cloud-based DDoS protection, which we still hold in reserve, but it has a habit of declaring that Liz can’t post things because apparently she’s a spambot. We also had to switch off IPv6 access to the website to use it, which was unfortunate for an educational project: the internet is eventually going to make a large transition to IPv6, and letting people learn about it and use it is desirable.

So we’ve scaled the hosting infrastructure out to a distributed cluster of machines. We’ve installed four additional little dual-core machines: two in our Telecity Sovereign House site, two in our Telecity Harbour Exchange site. Each of these runs a load balancer and forwards connections back to the main webserver. This means the inbound load is now shared over four separate 1Gbps links, and there’s rather more CPU available to calculate syncookies when required and rather more bandwidth to saturate.
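What the load balancer does matters more than which one it is, but for flavour, a minimal haproxy-style config for one front end might look like the following. This is purely illustrative – not our actual configuration – and the origin hostname is invented:

# /etc/haproxy/haproxy.cfg (illustrative sketch)
defaults
    mode http
    option forwardfor            # add X-Forwarded-For so the origin sees the real client IP
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend www
    bind :80                     # this front end's public address
    default_backend origin

backend origin
    server main origin.example.com:80 check   # the main webserver, reached over our internal network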

We load balance over the load balancers using DNS round robin, as you can see from our public DNS:

$ dig www.raspberrypi.org AAAA +short
lb.raspberrypi.org.
2a00:1098:0:80:1000:13:0:5
2a00:1098:0:80:1000:13:0:6
2a00:1098:0:82:1000:13:0:5
2a00:1098:0:82:1000:13:0:6

$ dig www.raspberrypi.org A +short
lb.raspberrypi.org.
93.93.130.39
93.93.130.214
93.93.128.211
93.93.128.230

Now, everybody knows that this is a stupid way of load balancing, and that you don’t get anything like even usage across your sites. This isn’t even slightly borne out by the bandwidth figures for the last few days:

93.93.128.211 347.04 GB
93.93.128.230 341.61 GB
93.93.130.39 349.58 GB
93.93.130.214 347.88 GB

That’s agreement to within 2%, which is a pretty even split. So much for commonly held wisdom; we prefer science.

We’ve set the entire internal network up with IPv6, so when you connect to one of the front-end machines, it’ll connect back to the main webserver over IPv6. One of the reasons for this is that we’re now running mission-critical services over IPv6 – the worldwide move to IPv6 is happening at a glacial pace, and we want to make sure that our support for it works well. Having an angry Liz phone you up if it doesn’t is a very effective motivator.

You may have seen odd forum and comment issues while we were setting this up. One of the forum spam filters allows blocking people based on source IP address. Under the new setup every comment arrives via one of the load balancers, so using the filter-by-IP feature resulted in dropping all new comments arriving through that load balancer – a quarter of our traffic. *Oops*. We had to fix that to read the X-Forwarded-For headers instead.
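The exact fix isn’t recorded here, but one standard approach on an Apache backend – assuming Apache 2.4’s mod_remoteip, and using the load balancers’ published addresses purely for illustration – is to restore the real client address before the application ever sees it:

# Illustrative Apache 2.4 snippet: recover the client IP from X-Forwarded-For
LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPHeader X-Forwarded-For
# Only trust the header on connections from our own load balancers
RemoteIPInternalProxy 2a00:1098:0:80:1000:13:0:5 2a00:1098:0:80:1000:13:0:6
RemoteIPInternalProxy 2a00:1098:0:82:1000:13:0:5 2a00:1098:0:82:1000:13:0:6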

Now of course the real question is: why aren’t we fronting the site with a massive cluster of Pis? Testing with hping3 suggests that a Pi starts to struggle at around 2,500 syns/sec. The front ends we have are absolutely fine at 50,000 syns/sec (roughly 10% CPU), so with four of them we can probably handle around 1,000,000+ syns/sec. It’d take 400 Pis to keep up with that, so it’d be a very, very large cluster of Pis, not to mention five switches in each site.
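If you want to reproduce that benchmark (against machines you own, please), hping3 generates a steady SYN stream; the target hostname here is invented:

$ sudo hping3 -S -p 80 -i u400 target.example.com   # one SYN every 400 microseconds, ~2,500 syns/sec
$ sudo hping3 -S -p 80 --flood target.example.com   # or send as fast as the sender can manage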

Of course, a very stern warning has been given to people who have access to the front-end machines – not only can they receive a million syns/sec, they can also send them, and that could seriously upset other internet users if it were directed at them.

Now, a side effect of this scale-out is that we’re left with a bunch of machines with a reasonable amount of excess CPU. Eben has *strong views* about wasting CPU cycles; it makes him very sad. So we’ve put them to use.

Rob Bishop and Gordon Hollingworth at Raspberry Pi spend quite a lot of time building software. Compiling it is time-consuming, and their laptops get hot and make fan noises. So we’ve installed a set of dual-core VMs, running under KVM, on the five servers. When everything is fully operational, the software team can kick off a build from the master VM, which then uses distcc to farm out the compile across all five machines. This means there are effectively 10 cores available most of the time for building software. When the website gets busy, the lower-priority VMs slow down and hand the cycles back to the load balancer/Apache/PHP/MySQL.
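From the master VM, a distributed build looks something like this – the hostnames are invented, and the team’s actual invocation may differ:

$ export DISTCC_HOSTS='localhost build1 build2 build3 build4'   # where the compile VMs live
$ make -j10 CC='distcc gcc' CXX='distcc g++'                    # enough parallel jobs to keep all ten cores busy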

Now, the Raspberry Pi is an educational project. It’s not just about educating children: adults still need to learn things, and that includes me. We’ve run many dual stack IPv4/IPv6 machines before, but we thought we’d try IPv6 only machines and discover the difficulties in order to improve our support for IPv6. So the distcc VMs are IPv6 only – they can’t access anything on the internet that isn’t accessible over IPv6. In reality this means they can see Google, Facebook, lots of mirror servers and a small fraction of other sites.

In the process of setting this up, I discovered that I was unable to get the Debian Squeeze network installer to install from an IPv6-only network, so I had to do the initial install of the VMs from a full install image rather than the cut-down one. I then realised that Mythic Beasts doesn’t have an IPv6-aware resolver yet – something we need to sort out – so I had to use Google’s public resolver. That’s still on my to-do list, along with full DNSSEC resolver support.
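For reference, pointing an IPv6-only box at Google’s public resolver is a two-line /etc/resolv.conf (these are Google’s published anycast addresses):

# /etc/resolv.conf – Google Public DNS over IPv6
nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844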

Happily, Debian appears to work fine as IPv6-only. The mirrors are v6-enabled, so the VMs receive updates and can install packages fine, and so far it appears to be going well.

There are still some things to do and questions to answer. Should we move Apache/PHP processing to the front-end nodes? Will WordPress Supercache and the other plugins cope in a distributed environment? Will file uploads still work? Can we solve that with NFS? Does NFS even work over IPv6? Should we install a Varnish cache on the front-end nodes and disable WordPress Supercache? Should we do both? Will it confuse people if we have two layers of caching that expire at different times? Is that better than what we have now? Instead of having TCP syn cookies on the whole time, should we enable them only when under attack? Have we made a dreadful mistake with the build VMs, and is it all going to go offline when Rob tries to compile OpenOffice? Should we stop worrying about all of these questions and instead work out whose job it is to buy the first round at the Cambridge Beer Festival?
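On the syncookies question, the relevant knob is a single sysctl – and, as a commenter points out below, with the default setting of 1 the kernel only actually sends cookies once the SYN backlog overflows, so leaving it enabled costs nothing in normal operation:

$ sudo sysctl -w net.ipv4.tcp_syncookies=1   # armed, but only used when the SYN queue overflows
$ echo 'net.ipv4.tcp_syncookies = 1' | sudo tee -a /etc/sysctl.conf   # persist across reboots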

If this is the sort of thing you’d find interesting, and you would like to be paid to solve exactly these sorts of questions, Mythic Beasts is recruiting.

We’re looking for both junior and senior people. We very strongly like bright, motivated people who get things done, and we’re not overly impressed by certifications. We’d really like a full-time person or two, but we’re not averse to taking on summer or gap-year students, provided they’re smart and they get things done.

Comments

Emmet

Thanks for that news article.

Paul

Why not try nginx as your front-facing webservers/cache/load balancers? I run that configuration and it’s very powerful and easy to configure.

Peter Stevens

Could you explain further how using nginx solves the issue of 2Gbps of syn packets not fitting into a 1Gbps ethernet link?

John

This is a fascinating article, particularly the bits about using v6 – many thanks, Pete!

Nginx obviously won’t increase the line capacity, but with the nginx cache (rather than Super Cache etc.) it will enable much lighter-weight handling of drive-by traffic.

See the Harvard Law article among others.

Nev

Sadly I’m just not fast enough to work for you :-(

psergiu

I think that job.pl CGI’s timing routines are a bit off.
Or does Mythic Beasts really only want people who can read, compute, write & submit the answer in under 3-4 seconds?

liz

No; I think they want someone who’s good at thinking and beating puzzles. I promise you that’s not an error.

Marcell Marton

You’re right. :)
Apparently it’s quite an easy one.

(I really like solving things like this.)

AndrewS

(I really like solving things like this.)

Me too :) Managed to solve the puzzle, even though I’m not looking for a job!

Ben

You can be faster if you’re a lazy person.

I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it.

spaceyjase

Ah, spare CPU cycles would be better spent on the game theory of who-buys-the-first-round conundrums.

Michael Bolanos

Great article.

http://www.mythic-beasts.com/cgi-bin/job.pl

funny… can you use some help from the U.S.?

Michael

Remsnet Admin - Horst Venzke

Hello Liz,

Practically, moving over to a hidden primary with DNSSEC & ACLs should solve the issue.
Access/ACL rules for the visible systems to the hidden primary and back.

The main DNS should be hosted at your major DNS provider.

Myself, I use the cheap DynDNS Professional service for those.

Hope this helps.

Peter Stevens

Could you explain further how DNSSEC solves the issue of 2Gbps of syn packets failing to fit down a 1Gbps ethernet link? I’m really interested to know how it works.

Marius Schiffer

Try widening the network cables with a screwdriver. More packets will fit through -> Problem solved! :P

Jonathan Chetwynd

Like Paul, I’m also concerned at the LAMP description – rather last century…

why not use nginx, nodejs, redis and similar?

wfm

Peter Stevens

I await your tutorial on how to install wordpress under node.js.

AndrewS

I’m also concerned at … rather last century

*sigh* Newer doesn’t always mean better!

Tried and tested is often more desirable, especially with things on this scale. I’m sure Pete knows what he’s doing…

scot

Maybe the person got kicked off the forums, or they couldn’t get their Pi to work, and felt like they were cheated somehow?

liz

If someone responded like that to being kicked off the forums, it’s a good demonstration that our mods make the right decisions, wouldn’t you say?

Max

Very interesting article – I am myself interested in high performance web acceleration techniques, and have experimented with Varnish and CDNs.

This MIGHT help your 2Gbps-into-1Gbps saturation problem: farm out the static content (images, JS, CSS, …) to a CDN, or to your own servers set up as static content servers. I’m sure the guys at NetDNA would be very nice about supporting the Foundation (they’ve been so to me).

Please note that I haven’t checked all of the site, just the Raspberry Pi logo on top.

Oh, and Varnish is great fun, too! But the debugging gets more complex, obviously :-)

Chaz Molegrips

Two words: Microsoft Security Essentials.

Since downloading it I’ve never had a DoSS attack. (It’s free, you don’t even need a TORRENT.)

David Refby

isn’t that three words?

AndrewS

Lol, I can’t tell if that’s naivety or sarcasm ;-)

paddy gaunt

Rounds at beer festivals are more ‘what’ questions than ‘who’ questions. I think you should switch to worrying about that NOW.

AndyS

I thought the first rule of security is You Don’t Talk About Security (in public).

Ben

The second rule is:

Security through obscurity is no security at all

Peter Stevens

Given you have to publish all the load balancer IP addresses in the public DNS, it’s not much of a secret.

Alastair Barber

Best job application ever – a great way to end the day! :D
Unfortunately I’m not looking to move right now though…

Travis

Not normally one for a “me too”, but me too. Very satisfying.

The casual mention about the wonderful Cambridge Beer Festival next week has my loyalty to my current job wavering though. If it’s weather like last year again I might do more than waver…

AndrewS

One of the perks of being self-employed – easy to take an afternoon off! :)

Peter Stevens

Time off? Vital customer meeting in a client office:

pete@small:~$ whois cambridge-camra.org.uk

Registrar:
Mythic Beasts Limited [Tag = MYTHIC-BEASTS]
URL: http://www.mythic-beasts.com

AndrewS

Ha :-)

Jim Manley

I’m not sure whether the references to “moving to the cloud” mean what I’m thinking, but I would distribute the site among a number of affiliated servers all over the world at what most practitioners call the “edge of the cloud”, e.g. using proxy servers. The idea is to make a DDoS attack impossible by making no “there” there, i.e. there is no publicly-accessible central server or cadre of servers in one physical location and at one IP address or subnet of addresses. Instead of requests going all the way to your servers, they should be served by the proxy server that is closest (in Internet “distance”) to the requester of the Foundation’s pages (or source of syns, etc.). When updates to the site are made, they are pushed to the remote proxy servers so that the master copy is never exposed to the public.

For starters where proxy servers could be located, I would approach all of the major universities with significant Internet services that deal in this sort of thing on a daily basis, such as MIT, the UC system, Stanford, Carnegie Mellon, and the foreign (to me) equivalents, which I’m guessing would include Cambridge in the UK. I would also ask the corporate friends who do this sort of thing for a living to contribute, starting with our mutual friends Google and even Microsoft, Yahoo, Apple, etc. The carrot would be that they would be allowed to advertise their support (let’s see how serious code.org really is about supporting computing education at the grass roots level). I would also ping the news media sites, starting with good ol’ Uncle Rupert – I hear he has change in his pocket from lunch that could fund the entire shebang. Perhaps Bill and Melinda would also like to help by convincing Microsoft and others of the benefit of the PR.

In fact, I would expand this to be a sort of NATO of computing education sites – the more the merrier – all for one, and one for all. The more participants there are to contribute resources, the lighter the load will be on any single site, and with automatic failover to remote sites, the benefits to anyone needing defense against a DDoS would be obvious.

This is especially true of the Foundation site because it’s not really that challenging of a content source with only one to a few stories per day and upwards of only a few hundred response posts to a typical story (but usually only dozens).

If at all possible, I would wean yourselves off the WordPress boat anchor. Even an HTML/CSS/PHP/MySQL-driven lowbrow solution would run circles around the kludge that WordPress and its troublesome plug-ins present, and it would be completely platform-agnostic as long as you stick to the actual W3C standards.

I have a patent in this area and I am available to start working on this as early as next week, and no later than the second week of June once school is out. I’ll be more than happy to talk to Eben about this at/after the Maker Faire, so now you have another reason to visit the aquarium so we can talk fish, otters, turkey, etc. :D

Michael Kent

Thanks Guys,

I enjoyed solving that one :)

Silas S. Brown

You don’t have to do this: “Instead of having tcp-syn cookies on the whole time we could only enable them when under attack”. The syn-cookies specs say they are switched on by the kernel only when under attack anyway. Leave them enabled and they’ll be switched on and off as needed; you don’t have to do the kernel’s job yourself :-)

macs

If not over budget, filtering the incoming traffic through Arbor filters is very effective.
