Independent research on frameworks and AI.
Generative Development Framework (GDF) is an independent research project. We publish whitepapers, books, and reproducible analyses — paired with the scripts and data behind every conclusion.
GDF.AI :: RESEARCH PROGRAM ACTIVE
Three arms, one project
Generative Development Framework (GDF) is the parent project. It runs as three arms: research, publications, and a small labs effort that feeds the other two.
GDF Research Program
Long-form analyses of frameworks, AI systems, and distributed compute. Each piece ships with the scripts and data used to reach its conclusions.
GDF Publications
Books and longer-form essays for engineering teams adopting AI. The Generative Development Framework series is published on Amazon and serves as the reference text for the Research Program.
GDF Labs
Prototypes, simulations, and experiments that feed the Research Program. When a Labs project produces useful tooling, we open-source it; when it produces a cautionary tale, we document the postmortem.
Whitepapers and notes
Long-form research from the firm — whitepapers, technical reports, and notes.
Toward LLM-Assisted Policy Enforcement at the Kernel Boundary
Runtime security controls for AI coding agents that execute system operations with user privileges. A hybrid architecture pairs kernel-level syscall interception with Claude Haiku 4.5 policy decisions over AWS Bedrock: pure-LLM verdicts time out on 56% of events at a 4-second budget, while a deterministic fast path for unambiguous threats resolves in 1.00 ms versus 3,617 ms for the LLM path. Evaluation across 1,247 events and 1,000 threat scenarios.
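To make the architecture concrete, here is a minimal Python sketch of the hybrid decision path. It is ours, not the paper's code: the pattern list, the llm_verdict placeholder, and the fail-closed timeout policy are illustrative assumptions, not the whitepaper's actual rules.

    import re
    from concurrent.futures import ThreadPoolExecutor
    from concurrent.futures import TimeoutError as FuturesTimeout

    # Illustrative fast-path rules for unambiguous threats; the real
    # policy set lives in the whitepaper and its scripts.
    FAST_DENY_PATTERNS = [
        re.compile(r"rm\s+-rf\s+/(?:\s|$)"),        # destructive wipe
        re.compile(r"curl\s+[^|]+\|\s*(?:ba)?sh"),  # pipe-to-shell
    ]

    LLM_BUDGET_S = 4.0  # per-event decision budget from the abstract

    def llm_verdict(event: str) -> str:
        """Placeholder for the Bedrock policy call; returns 'allow'
        or 'deny'. The signature is assumed, not from the paper."""
        raise NotImplementedError

    def decide(event: str) -> str:
        # 1) Deterministic fast path: ~1 ms, no network round trip.
        for pattern in FAST_DENY_PATTERNS:
            if pattern.search(event):
                return "deny"
        # 2) Ambiguous events pay the LLM round trip, hard-capped at
        #    the budget; we assume a fail-closed verdict on timeout.
        with ThreadPoolExecutor(max_workers=1) as pool:
            future = pool.submit(llm_verdict, event)
            try:
                return future.result(timeout=LLM_BUDGET_S)
            except FuturesTimeout:
                return "deny"

The point of the split shows up in the latencies: the pattern pass costs microseconds, so only genuinely ambiguous events pay the multi-second model round trip.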
No Free Signal: A Negative Result for Substrate-Evolution Around Fixed LLMs in an Embodied Multi-Agent Population
Can evolutionary pressure on the communication substrate, rather than on model weights, make a frozen LLM (Amazon Nova Lite) more adaptive inside a 25-creature multi-agent system? A seven-arm design with 140 controlled runs, including mute baselines, scrambled models, and noise emitters, finds that the full treatment does not outperform any control on fitness. A behavioral analysis still detects emission-source effects on receiver responses, and the methodology contributes cadence-matched noise controls.
The Real Limits of Distributed LLM Training: An Architectural Postmortem
A federated, peer-to-peer LLM training network: its architecture, its mechanisms, and the quantitative reasons centralized training still wins for frontier models. Reproducible Python scripts for bandwidth, straggler, convergence, cost, and poisoning analyses are included.
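For flavor, a back-of-envelope calculation of our own (not one of the paper's scripts) showing the shape of the bandwidth argument. The model size, swarm size, and uplink speed are illustrative assumptions; the ring all-reduce volume formula is the standard one.

    # Illustrative per-step gradient sync cost for data-parallel
    # training over consumer links. All parameters are assumptions.
    params = 7e9            # 7B-parameter model
    bytes_per_grad = 2      # fp16 gradients
    n_peers = 64            # swarm size
    uplink_Bps = 100e6 / 8  # 100 Mbit/s uplink, in bytes per second

    # Ring all-reduce pushes ~2 * (N-1)/N of the gradient volume
    # through each peer's uplink per step.
    volume = 2 * (n_peers - 1) / n_peers * params * bytes_per_grad
    sync_s = volume / uplink_Bps

    print(f"gradient volume per step: {volume / 1e9:.1f} GB")  # ~27.6
    print(f"sync time per step: {sync_s / 60:.0f} minutes")    # ~37

Minutes of communication against seconds of compute per step is the kind of gap the postmortem quantifies in detail.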
Monotonic Proof-State Advancement for Distributed Workflow Verification
A method for verifying distributed workflows when no single observer can witness every step of a business process. Introduces a monotonic confirmation-level hierarchy that prevents proof-state regression, threshold-triggered evaluator dispatch that suppresses premature policy evaluation, and a boundary-capped proof resolution scheme that distinguishes architecturally unobservable evidence from true policy failures. Illustrated with a payment-settlement case study.
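A minimal sketch of the central invariant, assuming hypothetical level names; the whitepaper defines the actual hierarchy, thresholds, and evaluator dispatch rules.

    from enum import IntEnum

    # Hypothetical confirmation levels; the paper's hierarchy differs.
    class ConfirmationLevel(IntEnum):
        UNOBSERVED = 0
        REPORTED = 1
        CORROBORATED = 2
        SETTLED = 3

    class ProofState:
        """One workflow step's proof state. Evidence can only raise
        the confirmation level, never lower it."""

        def __init__(self) -> None:
            self.level = ConfirmationLevel.UNOBSERVED

        def advance(self, observed: ConfirmationLevel) -> ConfirmationLevel:
            # Monotonic update: late or out-of-order evidence that
            # claims a lower level is ignored rather than applied,
            # so the proof state never regresses.
            if observed > self.level:
                self.level = observed
            return self.level

    state = ProofState()
    state.advance(ConfirmationLevel.CORROBORATED)
    state.advance(ConfirmationLevel.REPORTED)  # ignored: would regress
    assert state.level == ConfirmationLevel.CORROBORATED

Monotonicity is what lets partial observers merge evidence safely: no message ordering can talk a step back down to a weaker proof state.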
Books
The Generative Development Framework book series — methodology for engineering leaders adopting AI, and for full-stack engineers shipping with it.
Your Questions Answered
What is GDF?
GDF is an independent research and publication firm. We study software development frameworks, AI systems, and distributed compute, and publish the results as whitepapers, books, and technical notes, each paired with the scripts and data behind the conclusions.
Who is behind GDF?
GDF is led by Sterling Morrison, author of the Generative Development Framework book series. We collaborate with engineers and researchers across industry on a project-by-project basis.
What do you publish?
Three formats. Whitepapers: long-form analyses with code and data, like our postmortem on distributed LLM training. Books: methodology for software engineering teams adopting AI, available on Amazon. Notes: shorter pieces on specific architectural decisions or industry developments.
How is GDF different from an analyst firm?
Two differences. First, our research is open by default: the methodology and code are public, so anyone can reproduce the numbers. Second, we work the problem ourselves rather than aggregating vendor materials. When we say a system has a particular limitation, there is a script that demonstrates it.
Do you take commissioned work?
Yes. We take on a small number of commissioned engagements each quarter, typically architecture reviews, framework evaluations, or postmortems on internal initiatives. Reach out to hello@gdf.ai with a short description of the question you want answered.
Do you build or sell software?
No. GDF is a research and publication firm. If a piece of research produces useful tooling, we release it as open source alongside the whitepaper, but we do not maintain commercial software.
How do I cite a whitepaper?
Each whitepaper has a stable URL on gdf.ai and a publication date. Cite the title, author, date, and URL. If a paper has a DOI assigned, it will be listed on the publication page.
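For example, a citation might take this shape; every field below is a placeholder, not a real entry:

    Author, A. "Whitepaper Title." GDF Research Program, 2025.
    https://gdf.ai/whitepapers/whitepaper-slug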
Can I reproduce the numbers?
Yes. Every whitepaper that includes quantitative claims ships with a scripts directory containing the exact Python (or other) code used to generate the numbers, plus a README with the commands to re-run them. If you find a number you cannot reproduce, that is a bug we want to know about.
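In practice a re-run looks something like the following; the directory and script names here are hypothetical, and the README alongside each paper lists the real commands:

    cd scripts/
    python bandwidth_analysis.py   # regenerates one set of the paper's numbers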
How do I hear about new work?
New whitepapers appear on this site as they are released. For collaboration inquiries, email hello@gdf.ai.
Commission research or collaborate.
If you are evaluating a framework, a compute architecture, or an AI strategy and want an independent assessment, we take on a small number of engagements each quarter.
Email hello@gdf.ai

