Distributed AI Training, Powered by Community

GDF is an open-source network where anyone can contribute GPU power to train AI models, chat with domain specialists, and help build the future of decentralized machine learning.

NETWORK: ACTIVE
// Quick Start

Get Started in Seconds

01

Install the GDF CLI

$ pip install gdf
02

Share your idle GPU to train models

$ gdf contribute
03

Chat with domain-specialist models

$ gdf chat

How GDF Works

A community-driven GPU network with built-in compression, peer-to-peer distribution, and an open model registry.

View on GitHub

Community GPU Training

Run `gdf contribute` and share your idle GPU cycles to train AI models. No ML expertise needed — just install and go.

Chat with Specialists

Use `gdf chat` to talk to domain-expert models. Auto-routing sends your question to the right specialist.

Delta Compression

Shrink 14GB model updates to 200–400MB using diff-based compression with fp16 quantization and zlib.

P2P Model Distribution

Torrent-style chunk sharing across peers. Hubs seed once and peers reshare, yielding up to 1000x bandwidth savings at scale.

Hierarchical Merging

Regional hubs merge model updates locally before pushing upstream, keeping network traffic efficient and scalable.

Open Model Registry

A simple JSON registry on GitHub. Anyone can add models, run hubs, or contribute training data — fully open source.
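The torrent-style distribution above comes down to one idea: split an artifact into chunks, hash each chunk, and let peers verify anything they download against the manifest. A minimal sketch (chunk size and helper names are illustrative, not GDF's actual format):

```python
import hashlib

CHUNK_SIZE = 4  # bytes; tiny for illustration -- real chunks would be megabytes


def make_manifest(blob):
    # Split an artifact into fixed-size chunks and hash each one, so a peer
    # can fetch chunks from any other peer and still verify them against
    # the manifest. Torrent-style sketch; not GDF's exact wire format.
    chunks = [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks], chunks


def verify_chunk(chunk, expected):
    return hashlib.sha256(chunk).hexdigest() == expected


manifest, chunks = make_manifest(b"model-weights-bytes")
ok = all(verify_chunk(c, h) for c, h in zip(chunks, manifest))
```

Because verification needs only the manifest, a hub has to seed each chunk once; after that, peers can safely reshare chunks among themselves.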

// Architecture

How It Works

01

Contributor

Peers train on local GPUs

02

Hub

Hubs coordinate and distribute work

03

Merge

Deltas merged hierarchically

04

Global Model

Updated model shared to all
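The four steps above can be sketched as a two-level merge: peers produce weight-deltas, a regional hub averages them, and the regional results are averaged again upstream. This is a federated-averaging-style illustration; GDF's real merge rule may differ.

```python
def merge_deltas(deltas, weights=None):
    # Weighted element-wise average of weight-deltas (federated-averaging
    # style). A sketch of the idea only -- not GDF's exact merge logic.
    weights = weights or [1.0] * len(deltas)
    total = sum(weights)
    return [
        sum(w * d[i] for w, d in zip(weights, deltas)) / total
        for i in range(len(deltas[0]))
    ]


# Steps 1-3: a regional hub merges its peers' deltas first...
peer_deltas = [[0.2, -0.4], [0.4, 0.0]]
regional = merge_deltas(peer_deltas)

# ...then regional results are merged upstream into the global model (step 4).
global_delta = merge_deltas([regional, [0.1, -0.2]])
```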

Books & Resources

Learn about the ideas and approaches behind GDF, from distributed training to generative development.

// Support

Your Questions Answered

What is GDF?

GDF (Generative Development Framework) is an open-source, community-powered GPU compute network. Members share idle GPU cycles to collaboratively train AI models, and anyone can chat with the resulting domain-specialist models using a simple CLI.

How do I start contributing?

Install GDF with pip (`pip install gdf`), then run `gdf contribute`. The CLI connects to a hub, downloads the current model checkpoint, trains on assigned data, and uploads compressed weight updates — all automatically.

What hardware do I need?

Any CUDA-capable NVIDIA GPU with at least 6GB of VRAM will work. Consumer cards like the RTX 3060 or above are ideal. GDF handles batching and memory management so you don't need to tune anything.

How does delta compression work?

Instead of sending full model weights after training, GDF computes the diff between the base checkpoint and your updated weights, quantizes to fp16, and compresses with zlib. This shrinks a typical 14GB model update down to 200–400MB.
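The pipeline can be sketched in a few lines of standard-library Python — diff, fp16-quantize (struct format `'e'`), then zlib. This is a minimal illustration of the scheme, not GDF's actual wire format:

```python
import struct
import zlib


def compress_delta(base, updated):
    # Diff the updated weights against the base checkpoint, quantize the
    # delta to fp16 (struct format 'e'), then zlib-compress the bytes.
    delta = [u - b for b, u in zip(base, updated)]
    packed = struct.pack(f"{len(delta)}e", *delta)
    return zlib.compress(packed)


def apply_delta(base, blob):
    # Reverse: decompress, unpack the fp16 values, add back onto the base.
    delta = struct.unpack(f"{len(base)}e", zlib.decompress(blob))
    return [b + d for b, d in zip(base, delta)]


base = [0.10, 0.20, 0.30, 0.40]
updated = [0.10, 0.25, 0.30, 0.35]
blob = compress_delta(base, updated)
restored = apply_delta(base, blob)  # matches `updated` to fp16 precision
```

Most of the real-world savings come from the delta being sparse and low-entropy relative to full weights, which is exactly what zlib exploits.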

How are malicious or broken updates handled?

Hubs validate every incoming delta before merging. Outlier detection and norm-clipping reject suspicious updates. Training data stays on the hub side — contributors only receive model weights and data batches, never raw datasets from other peers.
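Norm-based validation of this kind can be sketched as follows — the thresholds and helper name here are hypothetical, chosen only to illustrate the reject-vs-clip distinction:

```python
def validate_delta(delta, max_norm=10.0):
    # Hypothetical thresholds for illustration; GDF's real checks may differ.
    norm = sum(x * x for x in delta) ** 0.5
    if norm > 10 * max_norm:
        # Wildly out-of-range update: likely corrupt or malicious, reject it.
        return None
    if norm > max_norm:
        # Moderately large update: clip it back to max_norm before merging.
        scale = max_norm / norm
        return [x * scale for x in delta]
    return delta
```

Clipping (rather than rejecting) moderately large deltas bounds any single contributor's influence on the merged model without discarding honest but noisy updates.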

Can I run my own hub?

Yes. Run `gdf hub start` to launch a hub. You'll need to register your hub and its supported models in the open JSON registry on GitHub. Hubs coordinate peers, distribute data batches, and merge incoming deltas.
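The registry schema isn't shown on this page; a hub entry might look something like the following, where every field name is illustrative rather than GDF's actual format:

```json
{
  "hubs": [
    {
      "name": "example-hub",
      "endpoint": "https://hub.example.org",
      "models": ["medical-qa", "code-review"]
    }
  ]
}
```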

What are specialist models?

Specialists are domain-expert models trained on focused datasets — for example, a medical Q&A specialist or a code-review specialist. When you run `gdf chat`, your message is automatically routed to the most relevant specialist.
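To make the dispatch idea concrete, here is a toy keyword-overlap router. GDF's actual routing mechanism is not specified here and is presumably learned; the specialist names and vocabularies below are purely illustrative:

```python
# Toy keyword-overlap router; illustrates dispatch, not GDF's real routing.
SPECIALISTS = {
    "medical-qa": {"symptom", "dosage", "diagnosis"},
    "code-review": {"function", "bug", "refactor"},
}


def route(message, default="general"):
    # Send the message to the specialist whose vocabulary overlaps it most;
    # fall back to a general model when nothing matches.
    words = set(message.lower().split())
    best, best_hits = default, 0
    for name, vocab in SPECIALISTS.items():
        hits = len(words & vocab)
        if hits > best_hits:
            best, best_hits = name, hits
    return best
```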

How else can I contribute?

You can run a hub, add models to the registry, contribute training datasets, improve the CLI, or help with documentation. Check out the GitHub repo at github.com/gdf-ai/gdf for open issues and contribution guidelines.

Ready to Contribute?

Every GPU helps. Install GDF, run one command, and start training models with the community.

Get Started on GitHub