
Register Your GPU & Go Live
Download the node client and register your GPU permissionlessly on Base. The entry tier starts at just an RTX 4060.
Serve Compressed AI Models
Your GPU runs TurboQuant-compressed models that fit in consumer VRAM. A model that normally needs 48GB runs in ~8GB, with zero accuracy loss.
Earn & Get Verified
Complete inference jobs, pass math-based verification challenges, earn ETH. Every response is proven correct.
The only network built for consumer GPUs. 6x less VRAM. Same results. Mathematically proven.
| Network | Min GPU | Verification | Consumer Accessible | Cost to Start |
|---|---|---|---|---|
| CompressNode | RTX 4060 (8GB) | Math-based proofs | Yes | $250 GPU + stake |
| Akash | RTX 3090+ (24GB) | TEE attestation | Limited | $1,500+ GPU |
| Render | RTX 3090+ (24GB) | Proof of Render | No | $1,500+ GPU |
| io.net | A100 / H100 | None | No | $10,000+ GPU |
| Bittensor | RTX 4090 (24GB) | Probabilistic scoring | No | $2,000+ GPU |
Your RTX 4060 can do what used to need an H100. TurboQuant's PolarQuant algorithm uses a deterministic polar coordinate transform with provable mathematical guarantees.
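PolarQuant's actual codec isn't specified here, so the following is only a toy sketch of the general idea: pairs of weights are mapped to polar coordinates (r, θ), and each coordinate is rounded onto a small uniform grid, so a pair costs a few bits instead of two full floats. All function names and bit widths below are illustrative assumptions, not TurboQuant's real algorithm.

```python
import math

def polar_quantize(pairs, r_bits=4, theta_bits=4, r_max=1.0):
    """Toy polar-coordinate quantizer (illustrative, not PolarQuant itself).

    Each (x, y) weight pair becomes (r, theta), and each coordinate is
    rounded onto a small grid: r_bits + theta_bits per pair instead of
    two 16-bit floats.
    """
    r_levels = (1 << r_bits) - 1
    t_levels = (1 << theta_bits) - 1
    codes = []
    for x, y in pairs:
        r = min(math.hypot(x, y), r_max)
        theta = math.atan2(y, x)  # in [-pi, pi]
        qr = round(r / r_max * r_levels)
        qt = round((theta + math.pi) / (2 * math.pi) * t_levels)
        codes.append((qr, qt))
    return codes

def polar_dequantize(codes, r_bits=4, theta_bits=4, r_max=1.0):
    """Invert the grid rounding back to approximate (x, y) pairs."""
    r_levels = (1 << r_bits) - 1
    t_levels = (1 << theta_bits) - 1
    out = []
    for qr, qt in codes:
        r = qr / r_levels * r_max
        theta = qt / t_levels * 2 * math.pi - math.pi
        out.append((r * math.cos(theta), r * math.sin(theta)))
    return out

weights = [(0.31, -0.52), (0.88, 0.05), (-0.44, 0.67)]
restored = polar_dequantize(polar_quantize(weights))
# Here: 8 bits per pair vs. 32 bits for two fp16 weights -> 4x smaller.
max_err = max(abs(a - b) for p, q in zip(weights, restored)
              for a, b in zip(p, q))
print(round(max_err, 3))
```

Because the transform and rounding are deterministic, the same weights always produce the same codes, which is what makes answers checkable after the fact.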
Before TurboQuant: requires an H100 / A100. After: runs on an RTX 4060. Mathematically proven.
Math-Based Verification
We don't just trust that nodes are honest. We prove it. TurboQuant compression follows strict mathematical laws. We periodically test every node with questions we already know the answers to. If a node's answer doesn't match the math, it gets deactivated and loses reputation. No TEEs. No probabilistic scoring. Pure mathematics.
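The challenge flow above can be sketched as a simple spot-check loop: the verifier holds prompts whose answers it already knows, and a node that returns anything else is deactivated. The class and method names here are hypothetical, and a real verifier would derive expected answers from the deterministic compression math rather than a hand-built table.

```python
import hashlib

def answer_digest(prompt: str, output: str) -> str:
    """Canonical digest of a node's response to a challenge prompt."""
    return hashlib.sha256(f"{prompt}\n{output}".encode()).hexdigest()

class ChallengeVerifier:
    """Spot-checks nodes with prompts whose answers are known in advance.

    Hypothetical sketch of the verification flow described above.
    """

    def __init__(self, known: dict[str, str]):
        self.known = known                  # challenge prompt -> expected digest
        self.reputation: dict[str, int] = {}

    def check(self, node_id: str, prompt: str, node_output: str) -> bool:
        self.reputation.setdefault(node_id, 100)
        if answer_digest(prompt, node_output) == self.known[prompt]:
            return True
        self.reputation[node_id] = 0        # mismatch: deactivated, reputation slashed
        return False

# A verifier that already knows the answer to one challenge.
verifier = ChallengeVerifier({"17 * 23 = ?": answer_digest("17 * 23 = ?", "391")})
print(verifier.check("honest-node", "17 * 23 = ?", "391"))    # prints True
print(verifier.check("cheating-node", "17 * 23 = ?", "400"))  # prints False
```

Because challenges are indistinguishable from ordinary jobs, a node can't selectively behave honestly only when it knows it's being tested.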
Select your GPU tier to see supported models and estimated daily earnings.
Got 8GB VRAM? You're in. Serve 7B-13B models and start earning.
Earnings depend on network demand, uptime, and node reputation score.
Join the network, register your GPU, and start earning ETH from verified AI inference on your consumer hardware.