PAQET
Packet-Level Proxy with KCP Protocol
Bypass the OS TCP/IP stack with raw sockets and libpcap. Optimized for gaming, VPN services, and high-performance networking.
Architecture Overview
paqet is a packet-level proxy that bypasses the OS TCP/IP stack using raw sockets and libpcap.
┌────────────────────────────────────────────────────────────┐
│  CLIENT                                            SERVER  │
│  ──────                                            ──────  │
│                                                            │
│  Application ──► SOCKS5/Forward ──► SMUX ──► KCP ──► pcap  │
│                                                            │
│            Raw TCP Packets (crafted)                       │
│      ◄────────────────────────────────────────►            │
│                                                            │
│  pcap ◄── KCP ◄── SMUX ◄── SOCKS5/Forward ◄── Application  │
└────────────────────────────────────────────────────────────┘
Key Components
- pcap: Captures/injects raw Ethernet frames via libpcap
- Packet Crafting: Builds TCP packets with forged headers
- KCP: Reliable UDP protocol with ARQ and congestion control
- SMUX: Multiplexes multiple streams over one KCP connection
- SOCKS5/Forward: Application-layer proxy interfaces
Configuration Reference
Complete Configuration Structure
role: "client" # "client" or "server"
log:
level: "none" # none, debug, info, warn, error, fatal
network:
interface: "eth0" # Network interface name
guid: "" # Windows NPF GUID (Windows only)
ipv4:
addr: "192.168.1.100:0" # IP:Port (port 0 = random ephemeral)
router_mac: "aa:bb:cc:dd:ee:ff"
ipv6:
addr: "[2001:db8::1]:0"
router_mac: "aa:bb:cc:dd:ee:ff"
pcap:
sockbuf: 4194304 # Ring buffer size (bytes)
tcp:
local_flag: ["PA"] # TCP flags for outgoing packets
remote_flag: ["PA"] # TCP flags for incoming packets
listen: # Server only
addr: "0.0.0.0:9999" # Listen address
server: # Client only
addr: "10.0.0.1:9999" # Server address
socks5: # Client only
- listen: "127.0.0.1:1080"
username: "" # Optional auth
password: ""
forward: # Client only
- listen: "127.0.0.1:8080"
target: "192.168.2.1:80"
protocol: "tcp" # "tcp" or "udp"
transport:
protocol: "kcp"
conn: 1 # Parallel KCP connections (1-256)
tcpbuf: 8192 # Application TCP buffer (bytes)
udpbuf: 4096 # Application UDP buffer (bytes)
kcp:
mode: "fast" # normal, fast, fast2, fast3, manual
nodelay: 0 # Manual mode only
interval: 30 # Manual mode only (ms)
resend: 2 # Manual mode only
nocongestion: 1 # Manual mode only
wdelay: true # Manual mode only
acknodelay: false # Manual mode only
mtu: 1350 # Max transmission unit (50-1500)
rcvwnd: 512 # Receive window (1-32768)
sndwnd: 512 # Send window (1-32768)
dshard: 0 # FEC data shards (0 = disabled)
pshard: 0 # FEC parity shards
block: "aes" # Encryption algorithm
key: "your-secret-key" # Encryption key
smuxbuf: 4194304 # SMUX receive buffer (bytes)
streambuf: 2097152 # Per-stream buffer (bytes)
Network Layer
Interface Selection (network.interface)
network:
interface: "eth0" # Linux
# interface: "en0" # macOS
# interface: "Ethernet" # Windows (name only, GUID required)
guid: "\\Device\\NPF_{...}" # Windows NPF GUID
Platform Notes
- Linux: Use interface name (eth0, ens33, enp3s0)
- macOS: Use interface name (en0, en1)
- Windows: Must provide NPF GUID from paqet iface command
Finding Interface
# Linux/macOS
ip link show # or: ifconfig
# Windows
paqet iface # Lists interfaces with GUIDs
IP Address Configuration
ipv4:
addr: "192.168.1.100:0" # IP:Port
router_mac: "aa:bb:cc:dd:ee:ff" # Gateway MAC
Router MAC Address
Required for packet crafting. Crafted packets are addressed directly to the gateway's MAC and injected onto the wire.
# Linux
ip neigh show | grep $(ip route | grep default | awk '{print $3}')
# macOS
arp $(netstat -rn | grep default | head -1 | awk '{print $2}')
# Windows
arp -a <gateway_ip>
TCP Flags (network.tcp)
tcp:
local_flag: ["PA"] # Flags for client→server packets
remote_flag: ["PA"] # Flags for server→client packets
Available Flags
| Flag | Name | Description |
|---|---|---|
| S | SYN | Connection setup |
| A | ACK | Acknowledgment |
| P | PSH | Push (deliver immediately) |
| F | FIN | Connection finish |
| R | RST | Connection reset |
| U | URG | Urgent data |
| E | ECE | ECN echo |
| C | CWR | Congestion window reduced |
| N | NS | ECN nonce |
Common Combinations
- ["PA"] - Push+Ack (standard data, default)
- ["A"] - Ack only (lower overhead)
- ["SA"] - SYN+Ack (mimic handshake response)
- ["PA", "A", "PA"] - Cycle through patterns (evasion)
PCAP Buffer (network.pcap.sockbuf)
pcap:
sockbuf: 4194304 # 4MB (client default)
# sockbuf: 8388608 # 8MB (server default)
Sizing Guide
| Throughput | Recommended Size |
|---|---|
| <10 Mbps | 4 MB |
| 10-50 Mbps | 8 MB |
| 50-100 Mbps | 16 MB |
| 100+ Mbps | 32-64 MB |
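For scripted deployments, the sizing table can be expressed as a small helper. This is a sketch; the function name and the 32 MB choice for the top tier (the table allows 32-64 MB) are ours:

```shell
# Pick a pcap sockbuf size in bytes from expected throughput in Mbps,
# mirroring the sizing table above. Helper name is illustrative.
sockbuf_bytes() {
  mbps=$1
  if   [ "$mbps" -lt 10 ];  then mb=4    # <10 Mbps
  elif [ "$mbps" -le 50 ];  then mb=8    # 10-50 Mbps
  elif [ "$mbps" -le 100 ]; then mb=16   # 50-100 Mbps
  else                           mb=32   # 100+ Mbps (table: 32-64 MB)
  fi
  echo $(( mb * 1024 * 1024 ))
}

sockbuf_bytes 75   # -> 16777216 (16 MB)
```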
KCP Protocol
KCP is a fast and reliable ARQ protocol implemented over UDP.
Mode Presets (transport.kcp.mode)
kcp:
mode: "fast" # normal, fast, fast2, fast3, manual
Preset Values
| Mode | nodelay | interval | resend | nocongestion | wdelay | acknodelay | Use Case |
|---|---|---|---|---|---|---|---|
| normal | 0 | 40ms | 2 | 1 | true | false | TCP-like, conservative |
| fast | 0 | 30ms | 2 | 1 | true | false | Balanced (default) |
| fast2 | 1 | 20ms | 2 | 1 | false | true | Low latency |
| fast3 | 1 | 10ms | 2 | 1 | false | true | Minimal latency |
| manual | custom | custom | custom | custom | custom | custom | Full control |
Important: When using mode presets (normal, fast, fast2, fast3), all manual mode parameters are ignored. Only use mode: "manual" when you need custom values.
Manual Mode Parameters
nodelay
nodelay: 1 # 0 or 1
- 0: Conservative, TCP-like retransmission behavior
- 1: Aggressive, immediate retransmission on loss detection
interval
interval: 10 # 10-5000 milliseconds
KCP's internal update interval. Controls ACK generation and loss detection frequency. Rule of thumb: interval ≈ RTT / 2
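A quick way to apply the rule of thumb; the RTT value here is illustrative — measure yours with ping against the server:

```shell
# Measure average RTT, then halve it for the KCP interval.
rtt_ms=60                     # e.g. avg from: ping -c 5 <server_ip>
interval=$(( rtt_ms / 2 ))
echo "$interval"              # -> 30
```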
resend
resend: 2 # 0, 1, or 2
- 0: Disable fast retransmit, wait for timeout
- 1: Retransmit after 1 duplicate ACK (very aggressive)
- 2: Retransmit after 2 duplicate ACKs (balanced)
nocongestion
nocongestion: 1 # 0 or 1
- 0: Enable congestion control (fair sharing, slower ramp-up)
- 1: Disable congestion control (max throughput, may cause loss)
wdelay
wdelay: false # true or false
- false: Flush immediately when data available (lower latency, more syscalls)
- true: Batch writes until next interval tick (higher throughput, added latency)
acknodelay
acknodelay: true # true or false
- true: Send ACK immediately on receive (faster loss detection)
- false: Batch ACKs with data (bandwidth efficient)
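Putting the manual parameters together: a sketch of a low-latency manual profile that uses the same values the fast2 preset applies (per the preset table above):

```yaml
kcp:
  mode: "manual"
  nodelay: 1        # aggressive retransmission
  interval: 20      # 20ms update tick
  resend: 2         # fast retransmit after 2 duplicate ACKs
  nocongestion: 1   # congestion control disabled
  wdelay: false     # flush immediately (lower latency)
  acknodelay: true  # ACK immediately (faster loss detection)
```

In practice you would only reach for mode: "manual" to deviate from these values, e.g. a longer interval on a high-RTT link.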
MTU (transport.kcp.mtu)
mtu: 1350 # 50-1500 bytes
Common Values
| Value | Description |
|---|---|
| 1472 | Maximum safe UDP over Ethernet (1500 - 20 IP - 8 UDP) |
| 1400 | Safe for most internet paths |
| 1350 | Default, accounts for encryption overhead |
| 1250 | Conservative, works with VPN overhead |
| 512 | Maximum compatibility, gaming |
Window Sizes (rcvwnd, sndwnd)
rcvwnd: 512 # 1-32768
sndwnd: 512 # 1-32768
Bandwidth-Delay Product
| Bandwidth | RTT | Min Window |
|---|---|---|
| 10 Mbps | 50ms | ~47 |
| 100 Mbps | 50ms | ~463 |
| 1 Gbps | 10ms | ~926 |
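The minimum window is the bandwidth-delay product divided by the packet size. A shell sketch using the default 1350-byte MTU; integer division floors, so round up and add headroom in practice:

```shell
# min window ≈ (bandwidth_bps × RTT_s) / 8 / MTU_bytes
# Reproduces the 100 Mbps / 50 ms row of the table above.
min_window() {
  bw_bps=$1; rtt_ms=$2; mtu=$3
  echo $(( bw_bps * rtt_ms / 1000 / 8 / mtu ))
}

min_window 100000000 50 1350   # -> 462 (table rounds up to ~463)
```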
Forward Error Correction (dshard, pshard)
dshard: 0 # Data shards (0 = disabled, default)
pshard: 0 # Parity shards
When enabled, uses Reed-Solomon erasure coding. FEC is disabled by default.
When to use
- Lossy networks (>5% packet loss)
- High-latency links where retransmission is expensive
- Real-time applications (gaming, VoIP)
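FEC trades bandwidth for resilience: each group of dshard data packets is sent with pshard parity packets, and Reed-Solomon coding can recover any pshard losses per group. A sketch with illustrative shard counts (10/3 is our example, not a paqet default):

```shell
# Reed-Solomon erasure coding: dshard data + pshard parity per group.
dshard=10; pshard=3
echo "overhead: $(( pshard * 100 / dshard ))%"           # -> overhead: 30%
echo "recoverable: $pshard of $(( dshard + pshard ))"    # -> recoverable: 3 of 13
```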
Buffers & Memory
Transport Buffers
transport:
tcpbuf: 8192 # Min: 4096 bytes
udpbuf: 4096 # Min: 2048 bytes
SMUX Buffers
kcp:
smuxbuf: 4194304 # 4MB default
streambuf: 2097152 # 2MB default
smuxbuf: Total buffer for all streams in a session. Minimum: 1024 bytes.
streambuf: Per-stream buffer. Minimum: 1024 bytes.
Memory Calculation
Per connection (approximate): Total = pcap_sockbuf + smuxbuf + (streambuf × concurrent_streams)
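A shell version of the formula, plugged with the numbers from the server example below (8 MB pcap buffer, default SMUX buffers):

```shell
# Total = pcap_sockbuf + smuxbuf × conns + streambuf × streams × conns
pcap_sockbuf=8388608    # 8 MB (server default)
smuxbuf=4194304         # 4 MB per connection
streambuf=2097152       # 2 MB per stream
conns=100; streams=10

total=$(( pcap_sockbuf + smuxbuf * conns + streambuf * streams * conns ))
echo "$(( total / 1024 / 1024 )) MB"   # -> 2408 MB (~2.4 GB)
```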
Example server (100 connections, 10 streams each):
PCAP: 8MB
SMUX: 4MB × 100 = 400MB
Streams: 2MB × 10 × 100 = 2GB
Total: ~2.4GB
Encryption
Algorithms (transport.kcp.block)
kcp:
block: "aes" # Encryption algorithm
key: "your-key" # Any length (derived via PBKDF2)
Available Algorithms
| Algorithm | Key Size | Speed | Notes |
|---|---|---|---|
| null | - | Fastest | No encryption, no authentication |
| none | - | Fastest | No encryption, has authentication |
| xor | 32 | Very fast | Weak, use only for testing |
| aes-128-gcm | 16 | Fast | Recommended - AEAD |
| aes-128 | 16 | Fast | Hardware accelerated |
| aes / aes-256 | 32 | Medium | Full 256-bit key |
| aes-192 | 24 | Medium | - |
| salsa20 | 32 | Fast | Stream cipher |
| sm4 | 16 | Fast | Chinese standard |
| blowfish | 32 | Medium | Legacy |
| twofish | 32 | Medium | AES alternative |
| cast5 | 16 | Medium | Legacy |
| 3des | 24 | Slow | Avoid |
| tea | 16 | Fast | Weak |
| xtea | 16 | Fast | Improved TEA |
Key Derivation
Keys are derived using PBKDF2 with salt "paqet" and 100,000 iterations. Recommendation: Use aes-128-gcm for best security/performance balance.
Key Generation
./paqet secret
Connection Multiplexing
Multiple Connections (transport.conn)
transport:
conn: 1 # 1-256 parallel KCP connections
Benefits
- Better bandwidth utilization
- Reduced head-of-line blocking
- Load balancing across connections
Guidelines
| Bandwidth | Recommended conn |
|---|---|
| <10 Mbps | 1 |
| 10-50 Mbps | 2-4 |
| 50-100 Mbps | 4-8 |
| 100-500 Mbps | 8-16 |
| 500+ Mbps | 16-32 |
OS Tuning
Linux Kernel Parameters
Critical: You must set both the default and max values. Applications use the default buffer size unless they explicitly request larger buffers.
# Socket buffer limits (max values)
sysctl -w net.core.rmem_max=268435456
sysctl -w net.core.wmem_max=268435456
# Socket buffer defaults (IMPORTANT: applications use these!)
sysctl -w net.core.rmem_default=16777216
sysctl -w net.core.wmem_default=16777216
# TCP buffer sizes: min default max
# The middle value is the DEFAULT - must be large enough!
sysctl -w net.ipv4.tcp_rmem="4096 8388608 268435456"
sysctl -w net.ipv4.tcp_wmem="4096 8388608 268435456"
# UDP buffers (critical for KCP)
sysctl -w net.core.netdev_max_backlog=65536
# Connection tracking (if using iptables)
sysctl -w net.netfilter.nf_conntrack_max=1000000
Make persistent across reboots
cat >> /etc/sysctl.conf << 'EOF'
net.core.rmem_max=268435456
net.core.wmem_max=268435456
net.core.rmem_default=16777216
net.core.wmem_default=16777216
net.ipv4.tcp_rmem=4096 8388608 268435456
net.ipv4.tcp_wmem=4096 8388608 268435456
net.core.netdev_max_backlog=65536
EOF
sysctl -p
NIC Optimization
# Enable offloading
ethtool -K eth0 tso on gso on gro on
# Ring buffers
ethtool -G eth0 rx 4096 tx 4096
File Descriptors
ulimit -n 65535
CPU Affinity
# Pin to specific cores
sudo taskset -c 0,1 ./paqet run -c config.yaml
# NUMA binding
sudo numactl --cpunodebind=0 --membind=0 ./paqet run -c config.yaml
Firewall Setup
Required iptables Rules (Server)
Critical: These rules are mandatory for proper operation.
#!/bin/bash
PORT=9999
# Bypass connection tracking
iptables -t raw -A PREROUTING -p tcp --dport $PORT -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport $PORT -j NOTRACK
# Drop kernel RST packets (kernel sends RST for packets it doesn't recognize)
iptables -t mangle -A OUTPUT -p tcp --sport $PORT --tcp-flags RST RST -j DROP
Why needed
- NOTRACK: Prevents conntrack from interfering with raw packets
- DROP RST: Kernel sends RST for TCP packets it didn't initiate; without this, connections die
nftables Alternative
#!/usr/sbin/nft -f
table inet paqet {
chain prerouting {
type filter hook prerouting priority raw; policy accept;
tcp dport 9999 notrack
}
chain output {
type filter hook output priority mangle; policy accept;
tcp sport 9999 tcp flags rst drop
}
}
Monitoring & Debugging
Log Levels (log.level)
log:
level: "none" # none, debug, info, warn, error, fatal
| Level | Value | Use Case |
|---|---|---|
| none | -1 | Production, best performance |
| debug | 0 | Troubleshooting, verbose |
| info | 1 | Normal operation |
| warn | 2 | Warnings only |
| error | 3 | Errors only |
| fatal | 4 | Fatal errors only |
Performance Impact: Debug logging can reduce throughput 10-20%. Use none in production.
Diagnostic Commands
ping - Test Connectivity
sudo ./paqet ping -c client.yaml
Tests: Network reachability, KCP handshake, encryption/decryption, round-trip time.
dump - Packet Capture
sudo ./paqet dump -p 9999 # Capture on port
sudo ./paqet dump -i eth0 -p 9999 # Specific interface
sudo ./paqet dump -p 9999 -v # Verbose
iface - List Interfaces
./paqet iface # Lists interfaces with GUIDs (Windows)
secret - Generate Key
./paqet secret # Generates secure random key
Troubleshooting
"Message too long" Error
Cause: KCP MTU exceeds the path MTU between client and server.
# Test different sizes until one works
ping -M do -s 1472 <server_ip> # Test 1500 byte path
ping -M do -s 1372 <server_ip> # Test 1400 byte path
Solution: Set KCP MTU to path_mtu - 100 (safe margin for encryption overhead). Both client and server must use the same MTU.
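Once a probe succeeds, the config value follows mechanically. For IPv4 ping, the payload plus 28 bytes of headers (20 IP + 8 ICMP) gives the path MTU:

```shell
# Convert a successful ping probe into a KCP MTU setting.
payload=1372                    # largest payload that pinged OK
path_mtu=$(( payload + 28 ))    # 20-byte IP header + 8-byte ICMP header
kcp_mtu=$(( path_mtu - 100 ))   # safe margin for encryption overhead
echo "set transport.kcp.mtu: $kcp_mtu"   # -> set transport.kcp.mtu: 1300
```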
Connection Timeout / Slow Ramp-up
Cause: nocongestion: 0 enables TCP-like congestion control, which can cause slow connection establishment on variable bandwidth links.
Solution: Use nocongestion: 1 (default) for variable bandwidth environments.
Config Not Applying
Cause: Wrong YAML key names. Invalid keys are silently ignored.
| Wrong Key | Correct Key |
|---|---|
| snd_wnd | sndwnd |
| rcv_wnd | rcvwnd |
| nc | nocongestion |
| data_shard | dshard |
| parity_shard | pshard |
"No buffer space available" Error
Cause: OS socket buffer limits are too small, causing packet loss when paqet tries to write to local services.
# Check current socket buffer sizes
ss -tnm | grep -A2 ":443" | head -10
# Look for rbXXXXX (receive buffer) - if it's small (e.g., rb87380 = 87KB), that's the problem
# Should be rb8388608 (8MB) or larger
Solution: Apply the OS tuning settings and restart both paqet and the local service to create new sockets with larger buffers.
Scenario Configurations
role: "client"
log:
level: "none"
network:
interface: "eth0"
ipv4:
addr: "192.168.1.100:0"
router_mac: "aa:bb:cc:dd:ee:ff"
pcap:
sockbuf: 4194304
server:
addr: "10.0.0.1:9999"
transport:
protocol: "kcp"
conn: 1
kcp:
mode: "fast3"
mtu: 512
rcvwnd: 128
sndwnd: 128
block: "aes-128-gcm"
key: "your-key"
smuxbuf: 1048576
streambuf: 524288
Quick Diagnostic Checklist
1. Check for Buffer Errors
journalctl -u paqet --since "1 hour ago" | grep -i "buffer"
# If output shows "No buffer space available" → Apply OS tuning
2. Verify Socket Buffer Sizes
ss -tnm | grep -A2 ":443"
# Should show: rb8388608 or larger
3. Check Retransmissions
cat /proc/net/snmp | grep Tcp
# RetransSegs should be much smaller than OutSegs
4. Verify Config Keys
grep -E "snd_wnd|rcv_wnd|data_shard|parity_shard|^nc:" /opt/paqet/config.yaml
# If any match → fix the key names
5. Check MTU
# Find path MTU
ping -M do -s 1372 <server_ip> # Test 1400 byte path
# Verify both configs use same MTU
grep "mtu:" /opt/paqet/config.yaml
Common Issues Summary
| Symptom | Likely Cause | Solution |
|---|---|---|
| "No buffer space available" errors | OS socket buffers too small | Apply OS tuning, restart services |
| 60%+ overhead | Buffer overflow causing retransmissions | Apply OS tuning with 8MB default |
| Config values not applying | Wrong YAML keys | Fix key names (see Troubleshooting) |
| "Message too long" errors | MTU too large | Reduce MTU to match path MTU |
| Slow connection ramp-up | nocongestion: 0 on variable BW | Use default or mode presets |