PAQET

Packet-Level Proxy with KCP Protocol

Bypass the OS TCP/IP stack with raw sockets and libpcap. Optimized for gaming, VPN services, and high-performance networking.

┌─────────────────────────────────────────────────────────────┐
│  CLIENT                              SERVER                 │
│  ───────                              ───────               │
│                                                             │
│  Application ──► SOCKS5/Forward ──► SMUX ──► KCP ──► pcap  │
│                                                             │
│              Raw TCP Packets (crafted)                      │
│           ◄────────────────────────────────────────────►    │
│                                                             │
│  pcap ◄── KCP ◄── SMUX ◄── SOCKS5/Forward ◄── Application  │
└─────────────────────────────────────────────────────────────┘

Architecture Overview

paqet is a packet-level proxy that bypasses the OS TCP/IP stack using raw sockets and libpcap.

Key Components

  • pcap: Captures/injects raw Ethernet frames via libpcap
  • Packet Crafting: Builds TCP packets with forged headers
  • KCP: Reliable UDP protocol with ARQ and congestion control
  • SMUX: Multiplexes multiple streams over one KCP connection
  • SOCKS5/Forward: Application-layer proxy interfaces

Configuration Reference

Complete Configuration Structure

yaml
role: "client"                    # "client" or "server"

log:
  level: "none"                   # none, debug, info, warn, error, fatal

network:
  interface: "eth0"               # Network interface name
  guid: ""                        # Windows NPF GUID (Windows only)
  ipv4:
    addr: "192.168.1.100:0"       # IP:Port (port 0 = random ephemeral)
    router_mac: "aa:bb:cc:dd:ee:ff"
  ipv6:
    addr: "[2001:db8::1]:0"
    router_mac: "aa:bb:cc:dd:ee:ff"
  pcap:
    sockbuf: 4194304              # Ring buffer size (bytes)
  tcp:
    local_flag: ["PA"]            # TCP flags for outgoing packets
    remote_flag: ["PA"]           # TCP flags for incoming packets

listen:                           # Server only
  addr: "0.0.0.0:9999"            # Listen address

server:                           # Client only
  addr: "10.0.0.1:9999"           # Server address

socks5:                           # Client only
  - listen: "127.0.0.1:1080"
    username: ""                  # Optional auth
    password: ""

forward:                          # Client only
  - listen: "127.0.0.1:8080"
    target: "192.168.2.1:80"
    protocol: "tcp"               # "tcp" or "udp"

transport:
  protocol: "kcp"
  conn: 1                         # Parallel KCP connections (1-256)
  tcpbuf: 8192                    # Application TCP buffer (bytes)
  udpbuf: 4096                    # Application UDP buffer (bytes)
  kcp:
    mode: "fast"                  # normal, fast, fast2, fast3, manual
    nodelay: 0                    # Manual mode only
    interval: 30                  # Manual mode only (ms)
    resend: 2                     # Manual mode only
    nocongestion: 1               # Manual mode only
    wdelay: true                  # Manual mode only
    acknodelay: false             # Manual mode only
    mtu: 1350                     # Max transmission unit (50-1500)
    rcvwnd: 512                   # Receive window (1-32768)
    sndwnd: 512                   # Send window (1-32768)
    dshard: 0                     # FEC data shards (0 = disabled)
    pshard: 0                     # FEC parity shards
    block: "aes"                  # Encryption algorithm
    key: "your-secret-key"        # Encryption key
    smuxbuf: 4194304              # SMUX receive buffer (bytes)
    streambuf: 2097152            # Per-stream buffer (bytes)

Network Layer

Interface Selection (network.interface)

yaml
network:
  interface: "eth0"        # Linux
  # interface: "en0"       # macOS
  # interface: "Ethernet"  # Windows (name only, GUID required)
  guid: "\\Device\\NPF_{...}" # Windows NPF GUID

Platform Notes

  • Linux: Use interface name (eth0, ens33, enp3s0)
  • macOS: Use interface name (en0, en1)
  • Windows: Must provide NPF GUID from paqet iface command

Finding Interface

bash
# Linux/macOS
ip link show                    # or: ifconfig

# Windows
paqet iface                     # Lists interfaces with GUIDs

IP Address Configuration

yaml
ipv4:
  addr: "192.168.1.100:0"        # IP:Port
  router_mac: "aa:bb:cc:dd:ee:ff" # Gateway MAC

Router MAC Address

Required for packet crafting. Packets are injected directly to the gateway.

bash
# Linux
ip neigh show | grep $(ip route | grep default | awk '{print $3}')

# macOS
arp -a $(netstat -rn | grep default | head -1 | awk '{print $2}')

# Windows
arp -a <gateway_ip>

TCP Flags (network.tcp)

yaml
tcp:
  local_flag: ["PA"]    # Flags for client→server packets
  remote_flag: ["PA"]   # Flags for server→client packets

Available Flags

Flag  Name  Description
S     SYN   Connection setup
A     ACK   Acknowledgment
P     PSH   Push (deliver immediately)
F     FIN   Connection finish
R     RST   Connection reset
U     URG   Urgent data
E     ECE   ECN echo
C     CWR   Congestion window reduced
N     NS    ECN nonce

Common Combinations

  • ["PA"] - Push+Ack (standard data, default)
  • ["A"] - Ack only (lower overhead)
  • ["SA"] - SYN+Ack (mimic handshake response)
  • ["PA", "A", "PA"] - Cycle through patterns (evasion)
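
As an illustrative sketch (not paqet's actual implementation), here is how a flag string such as "PA" maps onto the TCP flags byte, using the standard RFC 793/RFC 3168 bit positions:

```python
# Standard TCP flag bit positions (RFC 793, plus ECN bits from RFC 3168).
# Note: NS ("N") lives outside this byte and is omitted here.
FLAG_BITS = {"F": 0x01, "S": 0x02, "R": 0x04, "P": 0x08,
             "A": 0x10, "U": 0x20, "E": 0x40, "C": 0x80}

def flags_byte(spec: str) -> int:
    """OR together the bits for a flag string like "PA" or "SA"."""
    value = 0
    for ch in spec:
        value |= FLAG_BITS[ch]
    return value

print(hex(flags_byte("PA")))  # 0x18 (PSH|ACK)
print(hex(flags_byte("SA")))  # 0x12 (SYN|ACK)
```

Any combination the crafter emits must still look plausible to middleboxes, which is why PSH+ACK (the byte a normal data segment carries) is the default.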

PCAP Buffer (network.pcap.sockbuf)

yaml
pcap:
  sockbuf: 4194304    # 4MB (client default)
  # sockbuf: 8388608  # 8MB (server default)

Sizing Guide

Throughput    Recommended Size
<10 Mbps      4 MB
10-50 Mbps    8 MB
50-100 Mbps   16 MB
100+ Mbps     32-64 MB

KCP Protocol

KCP is a fast and reliable ARQ protocol implemented over UDP.

Mode Presets (transport.kcp.mode)

yaml
kcp:
  mode: "fast"    # normal, fast, fast2, fast3, manual

Preset Values

Mode    nodelay  interval  resend  nocongestion  wdelay  acknodelay  Use Case
normal  0        40ms      2       1             true    false       TCP-like, conservative
fast    0        30ms      2       1             true    false       Balanced (default)
fast2   1        20ms      2       1             false   true        Low latency
fast3   1        10ms      2       1             false   true        Minimal latency
manual  custom   custom    custom  custom        custom  custom      Full control

Important: When using mode presets (normal, fast, fast2, fast3), all manual mode parameters are ignored. Only use mode: "manual" when you need custom values.

Manual Mode Parameters

nodelay

yaml
nodelay: 1    # 0 or 1

  • 0: Conservative, TCP-like retransmission behavior
  • 1: Aggressive, immediate retransmission on loss detection

interval

yaml
interval: 10    # 10-5000 milliseconds

KCP's internal update interval. Controls ACK generation and loss detection frequency. Rule of thumb: interval ≈ RTT / 2
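
The rule of thumb above can be sketched as a tiny helper (illustrative only; suggest_interval is not a paqet function), clamping to the valid 10-5000 ms range:

```python
def suggest_interval(rtt_ms: float) -> int:
    """interval ≈ RTT / 2, clamped to the valid 10-5000 ms range."""
    return int(min(5000, max(10, rtt_ms / 2)))

print(suggest_interval(60))   # 30 -> same as the "fast" preset
print(suggest_interval(8))    # 10 -> floor of the valid range
```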

resend

yaml
resend: 2    # 0, 1, or 2

  • 0: Disable fast retransmit; wait for timeout
  • 1: Retransmit after 1 duplicate ACK (very aggressive)
  • 2: Retransmit after 2 duplicate ACKs (balanced)

nocongestion

yaml
nocongestion: 1    # 0 or 1

  • 0: Enable congestion control (fair sharing, slower ramp-up)
  • 1: Disable congestion control (max throughput, may cause loss)

wdelay

yaml
wdelay: false    # true or false

  • false: Flush immediately when data is available (lower latency, more syscalls)
  • true: Batch writes until the next interval tick (higher throughput, added latency)

acknodelay

yaml
acknodelay: true    # true or false

  • true: Send ACK immediately on receive (faster loss detection)
  • false: Batch ACKs with data (bandwidth efficient)

MTU (transport.kcp.mtu)

yaml
mtu: 1350    # 50-1500 bytes

Common Values

Value  Description
1472   Maximum safe UDP over Ethernet (1500 - 20 IP - 8 UDP)
1400   Safe for most internet paths
1350   Default; accounts for encryption overhead
1250   Conservative; works with VPN overhead
512    Maximum compatibility, gaming

Window Sizes (rcvwnd, sndwnd)

yaml
rcvwnd: 512    # 1-32768
sndwnd: 512    # 1-32768

Bandwidth-Delay Product

Bandwidth  RTT   Min Window
10 Mbps    50ms  ~47
100 Mbps   50ms  ~463
1 Gbps     10ms  ~926
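
The window figures above follow from the bandwidth-delay product: BDP in bytes divided by the KCP MTU gives the minimum window in packets. A sketch of the arithmetic (illustrative helper, not part of paqet):

```python
import math

def min_window(bw_mbps: float, rtt_ms: float, mtu: int = 1350) -> int:
    """Minimum KCP window in packets: BDP in bytes / KCP MTU, rounded up."""
    bdp_bytes = bw_mbps * 1e6 / 8 * rtt_ms / 1e3
    return math.ceil(bdp_bytes / mtu)

print(min_window(100, 50))   # 463 packets, matching the table above
print(min_window(10, 50))    # 47
```

Set rcvwnd/sndwnd comfortably above this floor so the window never caps throughput.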

Forward Error Correction (dshard, pshard)

yaml
dshard: 0    # Data shards (0 = disabled, default)
pshard: 0    # Parity shards

When enabled, FEC uses Reed-Solomon erasure coding; it is disabled by default.

When to use

  • Lossy networks (>5% packet loss)
  • High-latency links where retransmission is expensive
  • Real-time applications (gaming, VoIP)
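
One way to reason about the bandwidth cost: with d data shards and p parity shards, each group sends d + p packets to deliver d, and any d of them suffice to recover the group. A sketch (illustrative helper, not paqet code):

```python
def fec_overhead(dshard: int, pshard: int) -> float:
    """Extra bandwidth fraction spent on parity; 0.0 means FEC is disabled."""
    if dshard == 0:
        return 0.0
    return pshard / dshard

print(fec_overhead(10, 3))   # 0.3 -> 30% extra bandwidth, survives 3 losses per group
print(fec_overhead(0, 0))    # 0.0 -> disabled (the default)
```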

Buffers & Memory

Transport Buffers

yaml
transport:
  tcpbuf: 8192    # Min: 4096 bytes
  udpbuf: 4096    # Min: 2048 bytes

SMUX Buffers

yaml
kcp:
  smuxbuf: 4194304      # 4MB default
  streambuf: 2097152    # 2MB default

  • smuxbuf: Total buffer for all streams in a session. Minimum: 1024 bytes.
  • streambuf: Per-stream buffer. Minimum: 1024 bytes.

Memory Calculation

Approximate server total: Total = pcap_sockbuf + (smuxbuf × connections) + (streambuf × streams_per_conn × connections)

text
Example server (100 connections, 10 streams each):
PCAP: 8MB
SMUX: 4MB × 100 = 400MB
Streams: 2MB × 10 × 100 = 2GB
Total: ~2.4GB
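
The estimate above can be reproduced with a few lines (hypothetical helper; the buffer sizes mirror the defaults quoted in this document):

```python
def est_memory_mb(conns: int, streams: int,
                  pcap_sockbuf: int = 8 << 20,   # 8MB server default
                  smuxbuf: int = 4 << 20,        # 4MB default
                  streambuf: int = 2 << 20) -> float:
    """Shared pcap buffer + per-connection SMUX buffers + per-stream buffers."""
    total = pcap_sockbuf + conns * smuxbuf + conns * streams * streambuf
    return total / (1 << 20)

print(est_memory_mb(100, 10))   # 2408.0 MB, i.e. ~2.4GB as in the example
```

Trim smuxbuf/streambuf on memory-constrained servers before reducing connection counts.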

Encryption

Algorithms (transport.kcp.block)

yaml
kcp:
  block: "aes"        # Encryption algorithm
  key: "your-key"     # Any length (derived via PBKDF2)

Available Algorithms

Algorithm      Key Size  Speed      Notes
null           -         Fastest    No encryption, no authentication
none           -         Fastest    No encryption, has authentication
xor            32        Very fast  Weak; use only for testing
aes-128-gcm    16        Fast       Recommended - AEAD
aes-128        16        Fast       Hardware accelerated
aes / aes-256  32        Medium     Full 256-bit key
aes-192        24        Medium     -
salsa20        32        Fast       Stream cipher
sm4            16        Fast       Chinese standard
blowfish       32        Medium     Legacy
twofish        32        Medium     AES alternative
cast5          16        Medium     Legacy
3des           24        Slow       Avoid
tea            16        Fast       Weak
xtea           16        Fast       Improved TEA

Key Derivation

Keys are derived using PBKDF2 with salt "paqet" and 100,000 iterations.

Recommendation: Use aes-128-gcm for the best security/performance balance.
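
A sketch of that derivation using Python's standard library. Important assumptions: the underlying hash (SHA-1 here) and the 32-byte output length are not specified in this document — they are illustrative guesses, not paqet's confirmed parameters; only the salt and iteration count come from the text above.

```python
import hashlib

# Salt "paqet" and 100,000 iterations per the documentation above;
# SHA-1 PRF and 32-byte output are assumptions for illustration.
key = hashlib.pbkdf2_hmac("sha1", b"your-secret-key", b"paqet", 100_000, 32)
print(len(key))   # 32 raw key bytes
```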

Key Generation

bash
./paqet secret

Connection Multiplexing

Multiple Connections (transport.conn)

yaml
transport:
  conn: 1    # 1-256 parallel KCP connections

Benefits

  • Better bandwidth utilization
  • Reduced head-of-line blocking
  • Load balancing across connections

Guidelines

Bandwidth     Recommended conn
<10 Mbps      1
10-50 Mbps    2-4
50-100 Mbps   4-8
100-500 Mbps  8-16
500+ Mbps     16-32

OS Tuning

Linux Kernel Parameters

Critical: You must set both the default and max values. Applications use the default buffer size unless they explicitly request larger buffers.

bash
# Socket buffer limits (max values)
sysctl -w net.core.rmem_max=268435456
sysctl -w net.core.wmem_max=268435456

# Socket buffer defaults (IMPORTANT: applications use these!)
sysctl -w net.core.rmem_default=16777216
sysctl -w net.core.wmem_default=16777216

# TCP buffer sizes: min default max
# The middle value is the DEFAULT - must be large enough!
sysctl -w net.ipv4.tcp_rmem="4096 8388608 268435456"
sysctl -w net.ipv4.tcp_wmem="4096 8388608 268435456"

# UDP buffers (critical for KCP)
sysctl -w net.core.netdev_max_backlog=65536

# Connection tracking (if using iptables)
sysctl -w net.netfilter.nf_conntrack_max=1000000

Make persistent across reboots

bash
cat >> /etc/sysctl.conf << 'EOF'
net.core.rmem_max=268435456
net.core.wmem_max=268435456
net.core.rmem_default=16777216
net.core.wmem_default=16777216
net.ipv4.tcp_rmem=4096 8388608 268435456
net.ipv4.tcp_wmem=4096 8388608 268435456
net.core.netdev_max_backlog=65536
EOF

sysctl -p

NIC Optimization

bash
# Enable offloading
ethtool -K eth0 tso on gso on gro on

# Ring buffers
ethtool -G eth0 rx 4096 tx 4096

File Descriptors

bash
ulimit -n 65535

CPU Affinity

bash
# Pin to specific cores
sudo taskset -c 0,1 ./paqet run -c config.yaml

# NUMA binding
sudo numactl --cpunodebind=0 --membind=0 ./paqet run -c config.yaml

Firewall Setup

Required iptables Rules (Server)

Critical: These rules are mandatory for proper operation.

bash
#!/bin/bash
PORT=9999

# Bypass connection tracking
iptables -t raw -A PREROUTING -p tcp --dport $PORT -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport $PORT -j NOTRACK

# Drop kernel RST packets (kernel sends RST for packets it doesn't recognize)
iptables -t mangle -A OUTPUT -p tcp --sport $PORT --tcp-flags RST RST -j DROP

Why needed

  • NOTRACK: Prevents conntrack from interfering with raw packets
  • DROP RST: Kernel sends RST for TCP packets it didn't initiate; without this, connections die

nftables Alternative

bash
#!/usr/sbin/nft -f

table inet paqet {
    chain prerouting {
        type filter hook prerouting priority raw; policy accept;
        tcp dport 9999 notrack
    }
    chain output {
        type filter hook output priority mangle; policy accept;
        tcp sport 9999 tcp flags rst drop
    }
}

Monitoring & Debugging

Log Levels (log.level)

yaml
log:
  level: "none"    # none, debug, info, warn, error, fatal

Level  Value  Use Case
none   -1     Production, best performance
debug  0      Troubleshooting, verbose
info   1      Normal operation
warn   2      Warnings only
error  3      Errors only
fatal  4      Fatal errors only

Performance Impact: Debug logging can reduce throughput by 10-20%. Use none in production.

Diagnostic Commands

ping - Test Connectivity

bash
sudo ./paqet ping -c client.yaml

Tests: Network reachability, KCP handshake, encryption/decryption, round-trip time.

dump - Packet Capture

bash
sudo ./paqet dump -p 9999           # Capture on port
sudo ./paqet dump -i eth0 -p 9999   # Specific interface
sudo ./paqet dump -p 9999 -v        # Verbose

iface - List Interfaces

bash
./paqet iface    # Lists interfaces with GUIDs (Windows)

secret - Generate Key

bash
./paqet secret    # Generates secure random key

Troubleshooting

"Message too long" Error

Cause: KCP MTU exceeds the path MTU between client and server.

bash
# Test different sizes until one works
ping -M do -s 1472 <server_ip>   # Test 1500 byte path
ping -M do -s 1372 <server_ip>   # Test 1400 byte path

Solution: Set KCP MTU to path_mtu - 100 (safe margin for encryption overhead). Both client and server must use the same MTU.
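
The sizing rule can be expressed directly (kcp_mtu is a hypothetical helper, clamped to the configurable 50-1500 range):

```python
def kcp_mtu(path_mtu: int, margin: int = 100) -> int:
    """KCP MTU = path MTU minus a safety margin, clamped to 50-1500."""
    return max(50, min(1500, path_mtu - margin))

print(kcp_mtu(1500))   # 1400
print(kcp_mtu(1400))   # 1300
```

Remember to set the same resulting value in both the client and server configs.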

Connection Timeout / Slow Ramp-up

Cause: nocongestion: 0 enables TCP-like congestion control, which can cause slow connection establishment on variable bandwidth links.

Solution: Use nocongestion: 1 (default) for variable bandwidth environments.

Config Not Applying

Cause: Wrong YAML key names. Invalid keys are silently ignored.

Wrong Key     Correct Key
snd_wnd       sndwnd
rcv_wnd       rcvwnd
nc            nocongestion
data_shard    dshard
parity_shard  pshard

"No buffer space available" Error

Cause: OS socket buffer limits are too small, causing packet loss when paqet tries to write to local services.

bash
# Check current socket buffer sizes
ss -tnm | grep -A2 ":443" | head -10

# Look for rbXXXXX (receive buffer) - if it's small (e.g., rb87380 = 87KB), that's the problem
# Should be rb8388608 (8MB) or larger

Solution: Apply the OS tuning settings and restart paqet AND the local service to create new sockets with larger buffers.

Scenario Configurations

Low-latency gaming client (fast3 preset, small MTU):

yaml
role: "client"
log:
  level: "none"

network:
  interface: "eth0"
  ipv4:
    addr: "192.168.1.100:0"
    router_mac: "aa:bb:cc:dd:ee:ff"
  pcap:
    sockbuf: 4194304

server:
  addr: "10.0.0.1:9999"

transport:
  protocol: "kcp"
  conn: 1
  kcp:
    mode: "fast3"
    mtu: 512
    rcvwnd: 128
    sndwnd: 128
    block: "aes-128-gcm"
    key: "your-key"
    smuxbuf: 1048576
    streambuf: 524288

Quick Diagnostic Checklist

1. Check for Buffer Errors

bash
journalctl -u paqet --since "1 hour ago" | grep -i "buffer"
# If output shows "No buffer space available" → Apply OS tuning

2. Verify Socket Buffer Sizes

bash
ss -tnm | grep -A2 ":443"
# Should show: rb8388608 or larger

3. Check Retransmissions

bash
cat /proc/net/snmp | grep Tcp
# RetransSegs should be much smaller than OutSegs

4. Verify Config Keys

bash
grep -E "snd_wnd|rcv_wnd|data_shard|parity_shard|^nc:" /opt/paqet/config.yaml
# If any match → fix the key names

5. Check MTU

bash
# Find path MTU
ping -M do -s 1372 <server_ip>  # Test 1400 byte path

# Verify both configs use same MTU
grep "mtu:" /opt/paqet/config.yaml

Common Issues Summary

Symptom                             Likely Cause                             Solution
"No buffer space available" errors  OS socket buffers too small              Apply OS tuning, restart services
60%+ overhead                       Buffer overflow causing retransmissions  Apply OS tuning with 8MB default
Config values not applying          Wrong YAML keys                          Fix key names (see Troubleshooting)
"Message too long" errors           MTU too large                            Reduce MTU to match path MTU
Slow connection ramp-up             nocongestion: 0 on variable BW           Use default or mode presets