Software dev fortifies his blog with 'zip bombs' — attacking bots meet their end with explosive data package

(Image: a B-52 Stratofortress dropping bombs during the Vietnam War. Image credit: Shutterstock)

Programmer Ibrahim Diallo runs a blog hosted on his own tiny server, and from experience he knows that most of his traffic comes from bots trawling the internet for content. Many of these bots are harmless, but a few try to hijack his system by injecting malicious payloads or probing for vulnerabilities. When that happens, Diallo says on his blog, he serves up a hot zip bomb that expands to a thousand times its original size and crashes the machine running the bot.

Zip bombs are tiny compressed archives that conceal a massive payload. An egregious example of this kind of sneaky file is a 46MB archive that unpacks into a 4.5-petabyte monster, enough to overwhelm the resources of most computers. Zip bombs are generally considered malware, as they're designed to disable a target system by crashing it. Diallo, however, has flipped the script and is now using them as a way to defend against malicious bots.

Diallo says he made a 1MB file that decompresses into 1GB to disable bots trying to break into his system. He also has a 10MB-to-10GB compressed file for bots with more resources, ensuring that their memory is overwhelmed by this massive archive.
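Diallo's blog post walks through his own recipe for building these files; as a rough illustration of the idea (not his exact commands), a gigabyte of zeros run through gzip shrinks to roughly a megabyte, because DEFLATE compresses long runs of identical bytes at close to a 1,000:1 ratio:

    import gzip

    # Illustrative sketch: write 1 GiB of zero bytes through a gzip stream.
    # The resulting file ends up at roughly 1 MB on disk.
    CHUNK = 1024 * 1024      # 1 MiB of zeros per write
    CHUNKS = 1024            # 1024 chunks = 1 GiB uncompressed

    with gzip.open("1G.gzip", "wb", compresslevel=9) as f:
        zeros = b"\x00" * CHUNK
        for _ in range(CHUNKS):
            f.write(zeros)

Scaling the same loop up to 10 GiB of zeros yields the larger, roughly 10MB bomb for better-provisioned bots.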

Here's how this defensive bombing system works: when Diallo detects an offending bot, his server returns a 200 OK response and then serves up the zip bomb. The response metadata tells the bot that it's receiving compressed content, so the bot unpacks it in an attempt to scrape as much information as possible. But since the file is at least 1GB once unpacked, it overwhelms the memory of most simple, and even some fairly advanced, bots. If he's up against a more capable scraper with a few gigabytes of memory to spare, he feeds it the 10GB zip bomb instead, which will most likely crash it.
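The article describes the trick purely at the level of HTTP responses. As a minimal sketch of one common way to implement the pattern (the Flask framework, the catch-all route, and the looks_malicious() heuristic below are illustrative assumptions, not Diallo's code), the server can return the pre-built gzip bomb with a Content-Encoding: gzip header, so the bot's HTTP client inflates it automatically:

    from flask import Flask, Response, request

    app = Flask(__name__)

    def looks_malicious(req) -> bool:
        # Hypothetical check; a real deployment would use whatever heuristic
        # flags a client as hostile (user agent, probing paths, request rate).
        return "sqlmap" in req.headers.get("User-Agent", "").lower()

    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def serve(path):
        if looks_malicious(request):
            # 1G.gzip is the ~1MB file that inflates to ~1GB.
            # Content-Encoding: gzip marks the body as compressed, so a
            # well-behaved HTTP client decompresses it on receipt.
            with open("1G.gzip", "rb") as f:
                payload = f.read()
            return Response(payload, status=200,
                            headers={"Content-Encoding": "gzip"})
        return "Nothing to see here."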

If you want to try this system for yourself, Diallo outlines how to create your own bot-targeting zip bomb on his blog. He notes that you should be careful when doing so, though, as you can potentially self-detonate (i.e., accidentally open the zip bomb) and crash your own server. Zip bombs also aren't 100% effective, since there are ways to detect and disregard them. But against most simple bots, this should be more than enough to freeze the offending machine and take it out of action, at least until it's restarted.
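As a minimal sketch of that kind of countermeasure (assuming a Python scraper and the standard zlib module; none of this comes from Diallo's post), a bot can simply refuse to inflate a response past a fixed limit:

    import zlib

    MAX_BYTES = 50 * 1024 * 1024  # refuse to inflate more than ~50 MB

    def safe_inflate(compressed: bytes, limit: int = MAX_BYTES) -> bytes:
        # wbits=47 lets zlib auto-detect gzip or zlib framing
        d = zlib.decompressobj(wbits=47)
        out = d.decompress(compressed, limit + 1)  # stop after limit + 1 bytes
        if len(out) > limit:
            raise ValueError("decompressed size exceeds limit; likely a zip bomb")
        return out

A bot written this way treats the bomb as a bad response and moves on, which is why Diallo notes the trick mainly catches the simpler crawlers.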


Jowi Morales
Contributing Writer

Jowi Morales is a tech enthusiast with years of experience working in the industry. He has been writing for several tech publications since 2021, covering tech hardware and consumer electronics.

  • King_V
    Did anyone else giggle with near maniacal glee upon reading this? It can't just be me, can it?

    The very concept is just delightful to me.
    Reply
  • Mindstab Thrull
    King_V said:
    Did anyone else giggle with near maniacal glee upon reading this? It can't just be me, can it?

    The very concept is just delightful to me.
    Oh, you want to take something of mine without asking first? CHEW ON THIS! <\worms_armageddon>
    Reply
  • Grobe
    This is like trolling scammers - not only is it a lot of fun, but it also protects others by nerfing the offender.
    Reply
  • woiqwiecrwnsweyvja
    Header pre-scan (compression-ratio + file-count + uncompressed-size)
    Byte-cap during extract
    Process + timeout or resource limits

    It's like both the author and the article writer have never actually solved a problem before.

    Even the most basic timeout would stop something like this.

    from multiprocessing import Process
    import zipfile, shutil

    def do_extract(src, dest):
        with zipfile.ZipFile(src) as zf:
            zf.extractall(dest)

    def extract_with_timeout(src, dest, timeout=10):
        p = Process(target=do_extract, args=(src, dest))
        p.start()
        p.join(timeout)
        if p.is_alive():
            p.terminate()
            p.join()
            raise RuntimeError("Extraction timed out")
    Reply
  • dalek1234
    I remember doing that to myself at work back in the day by accident. I was testing what happens when you try to load something large into memory, so I created a three-dimensional array, 1000x1000x1000 bytes in size, not realizing that the array was bigger than the amount of RAM I had. My computer froze while instantiating the array. So yeah, I bombed myself.
    Reply
  • Mindstab Thrull
    woiqwiecrwnsweyvja said:

    Header pre-scan (compression-ratio + file-count + uncompressed-size)
    Byte-cap during extract
    Process + timeout or resource limits

    It's like both the author and the article writer have never actually solved a problem before.

    Even the most basic timeout would stop something like this.

    from multiprocessing import Process
    import zipfile, shutil

    def do_extract(src, dest):
        with zipfile.ZipFile(src) as zf:
            zf.extractall(dest)

    def extract_with_timeout(src, dest, timeout=10):
        p = Process(target=do_extract, args=(src, dest))
        p.start()
        p.join(timeout)
        if p.is_alive():
            p.terminate()
            p.join()
            raise RuntimeError("Extraction timed out")
    If you're familiar with programming, sure. Don't forget you can be interested in one aspect of a field - like computers - without having much knowledge in other aspects. Or you could be like me who... I guess if you've ever played Dungeons and Dragons, it's like a Bard, with a bunch of random knowledge in a bunch of random places. Also, the point of what they did is to shut down the crawlers. They probably have stuff to handle regular errors and just move on.
    Reply