Monday, April 15, 2013

The Effect of Increased Home Bandwidth on DDoS Attacks

If the recent DDoS attacks have taught us anything, it's that poor security makes them possible and that they are only going to get worse.  So why are they getting worse, and what can we do about it?

The Problem

One of the main reasons these DDoS attacks are so effective is amplification, specifically DNS amplification.  A standard DNS query is a 70- to 80-byte packet, and a typical answer runs 150-300 bytes.  That alone isn't dramatic, but it still at least doubles the data for a fairly small packet that can be sent out quickly.  The real problem is zone transfers: that same 70-80 byte query can balloon into a response of 3 KB or more.  The RDLENGTH field defines a 16-bit length in bytes for the RDATA field, so a single reply can be quite large.  The attacker spoofs the target's address as the source of the query, so the oversized reply lands on the victim.  All an attacker needs is an open resolver that allows zone transfers and the IP address of the target.
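
To put rough numbers on that, here's a quick back-of-the-envelope calculation in Python.  The packet sizes are the approximate figures above, not measurements:

    # Rough amplification factors, using the approximate packet sizes above.
    query_bytes = 80.0       # typical DNS query packet (70-80 bytes)
    answer_bytes = 300.0     # upper end of an ordinary answer
    transfer_bytes = 3000.0  # a large zone-transfer-style response

    print("ordinary answer: %.1fx amplification" % (answer_bytes / query_bytes))
    print("zone transfer:   %.1fx amplification" % (transfer_bytes / query_bytes))

So even an ordinary answer nearly quadruples the attacker's traffic, and a zone transfer multiplies it by almost 40.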

While this is fairly common in DDoS attacks right now, I believe the significant increase of bandwidth in the home will have a large effect on attacks of this nature.  The rise of gigabit fiber to the home is the biggest leap forward for home bandwidth since broadband first made its way into households.  While these gigabit services are fairly limited right now, they are expanding: Google is expanding from Kansas City to Austin, and a number of utility and co-op providers offer similar service.

One ISP saw around 300 Gb/s of traffic during the recent Spamhaus attack, mostly from DNS amplification.  Amplification attacks are currently popular because of the relatively low speed of home connections compared to some of the open DNS servers on the web.  How many bots with gigabit home service would it take to generate 300 Gb/s of traffic?  Let's look at some variables first.  The average bot today is probably a poorly patched XP machine with a 10/100 NIC, but to advance the argument let's say our new botnet is made up mostly of Windows 7 machines with gigabit NICs connected straight to their gateways.  Accounting for losses, each should be able to push around 850 Mb/s of traffic to its ISP.  What about the ISP?  Can it carry 300 Gb/s upstream?  Some of the smaller community-owned providers might have a problem with that, but the larger providers like Google and Verizon probably won't; the oversubscription ratio and the nature of the subscriber infrastructure play a huge part here.  Assuming the upstream can carry it, 300 / 0.85 ≈ 353 bots would generate that much traffic.  Could botnet herders round up that many bots?  I think so.  You might even find that many volunteers through something like the Low Orbit Ion Cannon.
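
Here's that arithmetic spelled out in Python, with the assumptions labeled (the 0.85 Gb/s per-bot figure is my estimate, not a measurement):

    # How many gigabit-connected bots does it take to source 300 Gb/s?
    target_gbps = 300.0   # traffic reported during the Spamhaus attack
    per_bot_gbps = 0.85   # assumed usable upstream per bot after overhead

    bots_needed = target_gbps / per_bot_gbps
    print("about %d bots" % round(bots_needed))  # about 353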

I believe that the large increase in home speeds, combined with a shrinking pool of open DNS servers, will cause a shift from DNS amplification attacks back to more traditional DDoS attacks.


The Solution

So how do we prevent this from getting out of hand?  It will take teamwork, advances in technology, and tougher enforcement of the law.  The tough part about teamwork is that we have a proven track record of being unable to rely on anyone else to think about security when deploying services, or even to apply basic patches to DNS and web servers.  So we will all have to apply a defense-in-depth strategy to fix this.

First off, users need to, at a minimum, start patching their home computers and stop clicking on shady links.  That helps prevent these botnets from forming in the first place, but the typical home user can't be trusted to take even these basic steps, so what can their ISPs do to help?

ISPs already have some measures in place.  Excessive bandwidth may trigger a loss of service or a call from the local help desk.  That doesn't always happen, though, and the rise of voice and video services makes it harder to tell good traffic from attack traffic.  ISPs could (and I believe some do) block DNS queries to anything other than their own DNS servers, although that would keep legitimate users from reaching excellent services like Google DNS and OpenDNS.  In lieu of blocking off-net DNS entirely, they could rate limit offsite queries; a reasonable home network shouldn't be making DNS zone transfers or firing off queries nonstop for hours on end.  Some ISPs already detect and quarantine botnet- or virus-infected users, but false positives (and even true positives) can make for an angry user base.
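
As a rough illustration, here's what per-subscriber rate limiting of offsite DNS might look like with iptables on a Linux gateway.  This is a minimal sketch: the resolver address (203.0.113.53) and the 20-queries-per-second threshold are hypothetical, and a real ISP would implement this in its own edge gear:

    # Let queries to the ISP's own resolver (hypothetical address) through untouched.
    iptables -A FORWARD -p udp --dport 53 -d 203.0.113.53 -j ACCEPT

    # Allow offsite DNS queries up to 20/sec per subscriber source IP...
    iptables -A FORWARD -p udp --dport 53 -m hashlimit \
        --hashlimit-name dns-offsite --hashlimit-mode srcip \
        --hashlimit-upto 20/sec --hashlimit-burst 50 -j ACCEPT

    # ...and drop anything above that rate.
    iptables -A FORWARD -p udp --dport 53 -j DROP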

How about all of those open DNS resolvers on the internet?  Sadly, many admins might not even know they are running one.  Any admin in doubt can check OpenResolverProject.org to see whether they are running an open resolver that might be contributing to the problem, and if they doubt why they should even care, Michael McNally at ISC.org wrote an excellent article on the subject.  Being an open resolver is one thing, but allowing zone transfers is, I believe, a larger issue.  In general, DNS should only be provided to the networks that need it.  If you are an ISP, why provide name resolution to someone you don't provide service to?  If you are a free DNS provider, don't allow zone transfers to users, and consider rate-limiting rules.  One of the best things you can do is source validation: if the requester isn't actually making the request, there is no need to send them a ton of data.
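
For anyone running BIND, a minimal named.conf sketch along these lines closes both holes.  The subscriber range (192.0.2.0/24) is a placeholder, and the response-rate-limiting line only applies to newer BIND 9 builds:

    // Assuming BIND 9; replace 192.0.2.0/24 with your actual subscriber range.
    acl "customers" { 192.0.2.0/24; };

    options {
        recursion yes;
        allow-recursion { customers; };  // no recursion for the rest of the internet
        allow-transfer { none; };        // refuse zone transfers (AXFR) outright
        // rate-limit { responses-per-second 10; };  // RRL, newer BIND 9 builds only
    };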

If you run a major website, you shouldn't trust anyone but yourself to look out for you.  Anycast services like CloudFlare and Prolexic, combined with some sort of CDN, can largely mitigate a DDoS and keep your site up, but that won't do much for the little guys who can't afford such services.  All websites can contribute, though: following the best practices on StopBadWare.org will help keep you from becoming a distributor of the malware that builds these botnets.


Let's wrap it up!

The ever-increasing speed and power of home computers, along with their ever-faster connections, make them a valuable asset to botnet herders and a pain for everyone trying to stop them.  Combine that with the growing complexity of networks and services, and this becomes a hard problem to solve.  New technologies like IPv6 and DNSSEC can help, but not until they are widely deployed, and their added complexity may bring problems of its own.  It all boils down to everyone doing their part, to the best of their ability, within their own area of responsibility.  That will greatly mitigate the current problems and help prevent future ones.  Left unchecked, we are all in for a world of hurt.

-Sean
