Wire-speed eBPF/XDP firewall with automatic port whitelisting.
~40–65 ns/pkt · 28× less CPU · TOML config · protocol plugins.
eBPF / XDP · Kernel ≥ 4.18 · IPv4 + IPv6 · systemd / OpenRC · TOML config · slot plugins · MPL-2.0
$ curl --proto '=https' --tlsv1.2 -sSfL https://raw.githubusercontent.com/Kookiejarz/Auto_XDP/refs/tags/v26.4.23a/setup_xdp.sh | sudo bash

Drops threats before the kernel even sees them.

XDP hooks into the NIC driver — the earliest possible point in the Linux packet path. Unlike iptables or nftables, packets are evaluated before the kernel networking stack, at wire speed. Auto XDP adds an auto-sync daemon that watches which ports are actually open and updates the firewall rules in real time. Zero manual config.

Packet path comparison
[Diagram] Traditional: NIC driver (hardware RX) → CPU → kernel stack (socket buf · TCP/IP) → CPU → iptables/netfilter (late check) → DROP; the full kernel path is wasted on every blocked packet. Auto XDP: NIC driver (hardware RX) → ⚡ XDP hook (driver level, pre-stack) → DROP at ≈0 CPU, or PASS legit traffic only to the kernel stack and your app (SSH · nginx · postgres). Drop before the stack = zero wasted work.
CPU reduction under flood
85.9% → 3.0% softirq
Per-packet latency
~40–65 ns, measured on real hardware
Configuration required
Zero: auto-sync handles the rest
Live Packet Decision Path — XDP Firewall Core
[Diagram] 🌐 Internet → NIC driver (eth0 / enp3s0, hardware RX queue) → ⚡ XDP hook (driver level) → XDP firewall core:

L3 pre-checks (VLAN · fragment): VLAN nesting > limit → DROP; IPv4 MF/offset and non-initial IPv6 fragments → DROP.
Bogon filter (if enabled): bogon/reserved src → DROP; legit → continue to classifier.
Protocol classifier: ETH → IPv4/v6 → L4 (TCP · UDP · ICMP · ARP/NDP · proto-41 SIT). proto-41 checks the sit4_endpoints HASH map: hit → PASS, miss → DROP.
TCP path: malformed-packet check (NULL · XMAS · SYN+FIN · SYN+RST · RST+FIN · bad doff · port=0 → DROP), then ACL / trusted src (acl_map CIDR match → PASS; trusted_ipv4/v6 LPM_TRIE → PASS). SYN: tcp_whitelist ARRAY[65536] plus per-IP, per-port SYN rate limit, then INSERT into tcp_conntrack LRU_HASH[262144]. Non-SYN (ACK): tcp_conntrack lookup; CT_MISS → DROP.
UDP path: udp_conntrack reply-tuple lookup (hit → PASS), ACL rules (acl_map match → PASS), trusted_ipv4/v6 LPM_TRIE CIDR match (hit → PASS), udp_whitelist ARRAY[65536] server ports, then per-src and global sliding-window UDP rate limit.
ICMP: token bucket, 100 pps burst with per-second refill. ARP / NDP: PASS.

XDP_PASS → kernel network stack (TCP/IP · socket layer) → your application (SSH · nginx · postgres …). XDP_DROP: zero CPU overhead. TC egress (tc_flow_track) records outbound SYN/UDP and writes conntrack seeds into tcp_ct4 / udp_ct4 so return traffic passes.
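The malformed-packet check on the TCP path can be sketched in plain userspace C. The flag combinations mirror the diagram above; the function itself is an illustration, not the project's actual BPF code.

```c
#include <stdbool.h>
#include <stdint.h>

/* TCP flag bits as they appear in the TCP header. */
#define TH_FIN 0x01
#define TH_SYN 0x02
#define TH_RST 0x04
#define TH_PSH 0x08
#define TH_ACK 0x10
#define TH_URG 0x20

/* Returns true if the segment matches one of the malformed patterns
 * the classifier drops: NULL scan, XMAS scan, contradictory flag
 * pairs, a data offset shorter than the minimum TCP header, or a
 * zero port. */
static bool tcp_is_malformed(uint8_t flags, uint8_t doff,
                             uint16_t sport, uint16_t dport)
{
    if (flags == 0)
        return true;                                  /* NULL scan */
    if ((flags & (TH_FIN | TH_PSH | TH_URG)) ==
                 (TH_FIN | TH_PSH | TH_URG))
        return true;                                  /* XMAS scan */
    if ((flags & (TH_SYN | TH_FIN)) == (TH_SYN | TH_FIN))
        return true;
    if ((flags & (TH_SYN | TH_RST)) == (TH_SYN | TH_RST))
        return true;
    if ((flags & (TH_RST | TH_FIN)) == (TH_RST | TH_FIN))
        return true;
    if (doff < 5)
        return true;                       /* bad data offset */
    if (sport == 0 || dport == 0)
        return true;                       /* port = 0 */
    return false;
}
```

In the real XDP program this test runs per packet before any map lookup, so scan traffic never costs a hash probe.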
Features

Everything you need.
Nothing you don't.

Auto XDP combines wire-speed packet filtering, zero-config port sync, and a clean operator CLI into a single cohesive firewall daemon — designed for Linux hosts that can't afford to lose a microsecond.

Wire-speed XDP

Filters at NIC driver level before packets enter the kernel stack. ~40–65 ns per-packet latency, 28× less CPU under flood.

XDP_DROP
Auto Port Sync

Daemon watches netlink for socket changes and updates BPF maps in real time. Zero manual firewall config.

zero config
TOML Config

Human-friendly /etc/auto_xdp/config.toml. Configure rate limits, trusted CIDRs, ACL rules, tunnels. SIGHUP hot-reload.
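The shipped schema isn't reproduced here, so the keys below are hypothetical placeholders that mirror the features named above (rate limits, trusted CIDRs, ACL rules, tunnel endpoints), not the authoritative defaults:

```toml
# /etc/auto_xdp/config.toml — illustrative sketch; key names are
# assumptions, not the shipped schema.

[rate_limit]
syn_per_ip  = 50      # SYN/s allowed per source IP
udp_per_src = 2000    # UDP pkts/s per source
icmp_pps    = 100     # ICMP token-bucket refill rate

[trusted]
cidrs = ["10.0.0.0/8", "2001:db8::/32"]

[[acl]]
action = "pass"
src    = "203.0.113.0/24"
port   = 443

[tunnel]
sit4_endpoints = ["192.0.2.1"]
```

Edit the file and send SIGHUP to the daemon to hot-reload without restarting.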

SIGHUP reload
Protocol Plugins

Loadable BPF slot handlers for GRE, ESP, SCTP, or custom protocols. Run axdp slot load gre, or point it at your own .o file.

bpf_tail_call
6in4 Tunnel Guard

proto-41 (SIT) traffic accepted only from configured sit4_endpoints. All other proto-41 sources dropped at line rate.

proto-41
IPv4 + IPv6 Conntrack

TCP SYN creates tracked state. TC egress records outbound flows so return traffic passes without reopening port holes.
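The reply-tuple idea behind the TC-egress seeding can be shown in a simplified userspace sketch: egress stores the outbound 4-tuple, and the ingress path looks up the reversed tuple so return traffic passes. The struct layout is an assumption for illustration, not the project's map key.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* IPv4 flow key, as a conntrack map might key entries.
 * 12 bytes, no padding on common ABIs, so memcmp is safe. */
struct flow4 {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
};

/* Build the key the ingress path would look up for a reply:
 * source and destination swapped relative to the outbound flow. */
static struct flow4 reverse_flow(const struct flow4 *fwd)
{
    struct flow4 rev = {
        .saddr = fwd->daddr, .daddr = fwd->saddr,
        .sport = fwd->dport, .dport = fwd->sport,
    };
    return rev;
}

/* A packet is a reply when it equals the reversed outbound tuple. */
static bool is_reply(const struct flow4 *outbound, const struct flow4 *pkt)
{
    struct flow4 rev = reverse_flow(outbound);
    return memcmp(&rev, pkt, sizeof rev) == 0;
}
```

Because the egress hook writes the seed before the first reply can arrive, return traffic matches conntrack instead of needing an open port in the whitelist.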

tc egress
Per-Source Rate Limits

SYN and UDP rate limits keyed per source IP, configurable by process name or IANA service. Aggregate caps available.
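The ICMP token bucket from the decision path (100 pps burst with per-second refill) can be modeled in a few lines; the constant comes from the text, the rest is an illustrative userspace sketch rather than the BPF implementation.

```c
#include <stdbool.h>
#include <stdint.h>

#define BUCKET_CAP 100  /* burst size: 100 packets per second */

struct bucket {
    uint32_t tokens;    /* tokens currently available */
    uint64_t last_sec;  /* second of the last refill */
};

/* Returns true if the packet may pass. The bucket refills to full
 * capacity once per second; each passed packet costs one token;
 * when empty, packets are dropped until the next refill. */
static bool bucket_allow(struct bucket *b, uint64_t now_sec)
{
    if (now_sec != b->last_sec) {   /* new one-second window */
        b->tokens = BUCKET_CAP;
        b->last_sec = now_sec;
    }
    if (b->tokens == 0)
        return false;               /* rate exceeded: drop */
    b->tokens--;
    return true;
}
```

In the kernel the same state lives in a per-source BPF map entry, so each attacker exhausts only its own bucket.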

anti-brute-force
nftables Fallback

When XDP cannot attach, the same control plane drives a dynamic nftables ruleset. Auto port sync keeps working.

graceful degradation
axdp CLI

Terminal control for everything: axdp stats · axdp acl add · axdp trust · axdp slot load · axdp under-attack on · axdp log-level

operator CLI

Architecture

Auto XDP runs entirely in the Linux kernel fast path. XDP hooks at the NIC driver level drop malicious traffic before it ever touches the network stack, while TC egress tracks outbound connections to seed the conntrack allowlist — all coordinated through pinned BPF maps shared between kernel programs and userspace daemons.

Full lifecycle: install → boot → kernel plane → BPF maps → userspace

Auto XDP system architecture diagram
BPF Maps

Auto-sync port whitelist.

The xdp_port_sync daemon watches listening sockets in real time using Linux Netlink Process Connector. When a process opens or closes a port, the BPF maps are updated within milliseconds — no manual firewall rules, ever.

xdp_port_sync.py
tcp_whitelist ARRAY[65536]
udp_whitelist ARRAY[65536]
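As a self-contained illustration of the "which ports are listening?" question the daemon answers, the snippet below parses a data line of /proc/net/tcp, where the local port is hex after the colon and state 0x0A means LISTEN. The real xdp_port_sync uses the Netlink Process Connector instead of polling procfs; this is only a sketch of the underlying lookup.

```c
#include <stdio.h>

#define TCP_LISTEN 0x0A

/* Parse one data line of /proc/net/tcp and return the local port
 * if the socket is in LISTEN state, or -1 otherwise. Fields:
 *   sl: local_address rem_address st ...
 * where local_address is hex "ADDR:PORT" and st is the hex state. */
static int listen_port_from_line(const char *line)
{
    unsigned int addr, port, rem_addr, rem_port, state;
    if (sscanf(line, " %*d: %8x:%4x %8x:%4x %2x",
               &addr, &port, &rem_addr, &rem_port, &state) != 5)
        return -1;
    return state == TCP_LISTEN ? (int)port : -1;
}
```

Each LISTEN port found this way would map to setting tcp_whitelist[port] = 1 in the ARRAY[65536]; when the socket closes, the entry is cleared again.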
Performance

Same flood. 28× less CPU.

Tested with a high-performance AMD EPYC™ 7Y43 attacker generating ~367k PPS / 188 Mbps of UDP flood against a 1 vCPU AMD Ryzen 9 3900X target over the public internet.

Auto XDP OFF
85.9%
softirq CPU — kernel processing every packet
Auto XDP ON
3.0%
softirq CPU — packets dropped at NIC driver level
How to reproduce:
Load the pktgen module (modprobe pktgen) on the attacker, configure a 64-byte UDP flood (pkt_size 64, clone_skb 100, count 10000000), and compare top softirq usage while sudo axdp watch shows live counter deltas on the target.
Demo

See it in action.

VIDEO COMING SOON
Live install + flood test demo
Origin

Why I built this.

Personal cloud instances are constantly scanned and probed. Every day, bots hammer SSH, random high ports, and anything that looks like it might be an exposed service. Traditional firewalls like iptables work — but they process packets after the kernel networking stack, adding latency and CPU overhead. Worse, they require manual port management: every time you start a new service, you have to remember to open the firewall.

I wanted something that hooks in at the NIC driver level — the earliest possible interception point — and manages itself. When you start a new process that binds a port, the firewall should already know. When that process exits, the port should close automatically.

The result is Auto XDP: an eBPF/XDP firewall that sits at wire speed and a userspace daemon that keeps it honest. One install command. Zero ongoing config. And if your kernel doesn't support native XDP, it falls back to nftables automatically — so it works everywhere.

Design principles
Wire speed first. XDP at the NIC driver, before any kernel processing.
Self-managing. Daemon watches sockets via Netlink, syncs BPF maps in real time.
🔁 Graceful fallback. nftables backend activates automatically when XDP can't attach.
🛡 Defense in depth. Conntrack, rate limits, malformed-packet drops — layers, not luck.