Andrej Karpathy open-sources the autoresearch project, which automatically runs hundreds of LLM training experiments.

Gate News reports that on March 9, Eureka Labs founder and OpenAI co-founder Andrej Karpathy publicly released the open-source project autoresearch. The day before (March 8), he had packaged up for developers the AI-agent auto-tuning workflow he previously used in his LLM training project nanochat.

The project follows a "human writes Markdown, AI writes code" design pattern: the developer defines a research direction in a program.md file, and the AI agent autonomously modifies train.py, a roughly 630-line script containing a complete GPT model, a Muon + AdamW optimizer, and the training loop. Each experiment runs for a fixed 5 minutes, and the evaluation metric is bits per byte on the validation set (val_bpb). Changes that beat the baseline are kept and committed; otherwise they are discarded. At this pace, roughly 12 experiments run per hour, or about 100 overnight. In Karpathy's demonstration, 15 out of 83 experiments produced effective improvements.

The project requires only a single NVIDIA GPU (tested on an H100), depends on PyTorch and a handful of packages, and is released under the MIT license. Community forks adapting it to macOS and MLX have already appeared.
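The keep-if-better selection loop described above can be sketched in a few lines of Python. This is an illustrative simplification, not autoresearch's actual code: the function and variable names are hypothetical, and the val_bpb values are made up, standing in for the result of a real 5-minute training run.

```python
def accept_if_better(candidate_bpb: float, best_bpb: float) -> bool:
    # Lower bits-per-byte on the validation set is better; keep a change
    # only if it strictly beats the current baseline, otherwise discard it.
    return candidate_bpb < best_bpb

# Illustrative val_bpb results from a sequence of agent-edited experiments.
results = [1.02, 0.99, 1.01, 0.97, 0.98]
best = 1.00  # hypothetical baseline val_bpb
kept = []
for bpb in results:
    if accept_if_better(bpb, best):
        best = bpb        # this edit becomes the new baseline
        kept.append(bpb)  # in the real project, the change would be committed
print(best, kept)  # -> 0.97 [0.99, 0.97]
```

In this toy run, two of five experiments improve on the baseline and are retained, mirroring the roughly 15-of-83 hit rate Karpathy reported.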
