
Axolotl

A Free and Open Source LLM Fine-tuning Framework


🎉 Latest Updates

  • 2025/06: Magistral with mistral-common tokenizer support has been added to Axolotl. See docs to start training your own Magistral models with Axolotl!
  • 2025/04: Llama 4 support has been added in Axolotl. See docs to start training your own Llama 4 models with Axolotl's linearized version!
  • 2025/03: Axolotl has implemented Sequence Parallelism (SP) support. Read the blog and docs to learn how to scale your context length when fine-tuning.
  • 2025/03: (Beta) Fine-tuning Multimodal models is now supported in Axolotl. Check out the docs to fine-tune your own!
  • 2025/02: Axolotl has added LoRA optimizations to reduce memory usage and improve training speed for LoRA and QLoRA in single-GPU and multi-GPU training (DDP and DeepSpeed). Jump into the docs to give it a try.
  • 2025/02: Axolotl has added GRPO support. Dive into our blog and GRPO example and have some fun!
  • 2025/01: Axolotl has added Reward Modelling / Process Reward Modelling fine-tuning support. See docs.

Overview

Axolotl is a free and open-source tool designed to streamline post-training and fine-tuning for the latest large language models (LLMs).

Features:

  • Multiple Model Support: Train various models like GPT-OSS, LLaMA, Mistral, Mixtral, Pythia, and many more available on the Hugging Face Hub.
  • Multimodal Training: Fine-tune vision-language models (VLMs) including LLaMA-Vision, Qwen2-VL, Pixtral, LLaVA, SmolVLM2, and audio models like Voxtral with image, video, and audio support.
  • Training Methods: Full fine-tuning, LoRA, QLoRA, GPTQ, QAT, Preference Tuning (DPO, IPO, KTO, ORPO), RL (GRPO), and Reward Modelling (RM) / Process Reward Modelling (PRM).
  • Easy Configuration: Re-use a single YAML configuration file across the full fine-tuning pipeline: dataset preprocessing, training, evaluation, quantization, and inference.
  • Performance Optimizations: Multipacking, Flash Attention, Xformers, Flex Attention, Liger Kernel, Cut Cross Entropy, Sequence Parallelism (SP), LoRA optimizations, Multi-GPU training (FSDP1, FSDP2, DeepSpeed), Multi-node training (Torchrun, Ray), and many more!
  • Flexible Dataset Handling: Load from local, HuggingFace, and cloud (S3, Azure, GCP, OCI) datasets.
  • Cloud Ready: We ship Docker images and also PyPI packages for use on cloud platforms and local hardware.
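
As a rough illustration of the single-YAML workflow described above, a minimal LoRA config might look like the sketch below. The model id, dataset, and hyperparameter values here are illustrative placeholders, not a tuned recipe; run `axolotl fetch examples` for vetted configs.

```yaml
# Hypothetical minimal LoRA config. Field names follow Axolotl's
# example configs; the values are illustrative, not a tuned recipe.
base_model: NousResearch/Llama-3.2-1B   # any Hugging Face Hub model id
datasets:
  - path: tatsu-lab/alpaca              # illustrative instruction dataset
    type: alpaca
adapter: lora
lora_r: 16
lora_alpha: 32
lora_target_linear: true
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
optimizer: adamw_torch
output_dir: ./outputs/lora-out
```

The same file is then reused across the pipeline, e.g. `axolotl preprocess config.yml`, `axolotl train config.yml`, and `axolotl inference config.yml`.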

🚀 Quick Start - LLM Fine-tuning in Minutes

Requirements:

  • NVIDIA GPU (Ampere or newer for bf16 and Flash Attention) or AMD GPU
  • Python 3.11
  • PyTorch ≥2.8.0

Google Colab

Open In Colab

Installation

Using pip

pip3 install -U packaging==26.0 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]

# Download example axolotl configs, deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs  # OPTIONAL

Using Docker

Installing with Docker can be less error-prone than installing in your own environment.

docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest

Other installation approaches are described here.

Cloud Providers

Your First Fine-tune

# Fetch axolotl examples
axolotl fetch examples

# Or, specify a custom path
axolotl fetch examples --dest path/to/folder

# Train a model using LoRA
axolotl train examples/llama-3/lora-1b.yml

That's it! Check out our Getting Started Guide for a more detailed walkthrough.

📚 Documentation

🤝 Getting Help

🌟 Contributing

Contributions are welcome! Please see our Contributing Guide for details.

📈 Telemetry

Axolotl has opt-out telemetry that helps us understand how the project is being used and prioritize improvements. We collect basic system information, model types, and error rates, but never personal data or file paths. Telemetry is enabled by default. To disable it, set AXOLOTL_DO_NOT_TRACK=1. For more details, see our telemetry documentation.
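
Since the opt-out is just an environment variable, disabling it in a shell session looks like the following (setting it in your shell profile or job script works the same way):

```shell
# Opt out of Axolotl telemetry for this shell session.
export AXOLOTL_DO_NOT_TRACK=1

# Confirm the variable is set before launching training.
echo "$AXOLOTL_DO_NOT_TRACK"
```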

❤️ Sponsors

Interested in sponsoring? Contact us at wing@axolotl.ai

📝 Citing Axolotl

If you use Axolotl in your research or projects, please cite it as follows:

@software{axolotl,
  title = {Axolotl: Open Source LLM Post-Training},
  author = {{Axolotl maintainers and contributors}},
  url = {https://github.com/axolotl-ai-cloud/axolotl},
  license = {Apache-2.0},
  year = {2023}
}

📜 License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.