# supervised-fine-tuning

Here are 22 public repositories matching this topic...

🎯 Fine-tuning LLMs using LlamaFactory for financial intent understanding | Evaluating open-source models on OpenFinData benchmark | Full implementation with multiple models (Qwen2.5/ChatGLM3/Baichuan2/Llama3)

  • Updated Jan 16, 2025
  • Jupyter Notebook
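Fine-tuning for intent understanding, as in the entry above, starts from instruction-formatted SFT data. The sketch below shows what one Alpaca-style training record might look like; the instruction text and intent labels are hypothetical placeholders (not taken from the OpenFinData benchmark), and LlamaFactory would typically register such a JSON file in its dataset configuration before training.

```python
import json

# A minimal sketch of Alpaca-style SFT records for financial intent
# understanding. The instruction wording and intent labels here are
# hypothetical examples, not the repository's actual dataset.
def make_sft_record(query: str, intent: str) -> dict:
    return {
        "instruction": "Classify the financial intent of the user query.",
        "input": query,
        "output": intent,
    }

records = [
    make_sft_record("What is the current yield on 10-year treasuries?", "rate_inquiry"),
    make_sft_record("Transfer 500 yuan to my savings account", "funds_transfer"),
]

# Serialize to the JSON layout a dataset registry would point at.
dataset_json = json.dumps(records, ensure_ascii=False, indent=2)
print(dataset_json)
```

Each record pairs a fixed instruction with a user query and the target intent label, which is the shape most SFT toolchains expect for instruction tuning.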

End-to-end Supervised Fine-Tuning (SFT) pipeline for TinyLlama-1.1B-Chat, specialized in trademark similarity risk assessment. The pipeline covers heuristic-labeled SFT data, CPU-only LoRA training, adapter validation, full-weight merge, GGUF export, Q4_K_M quantization, and local inference deployment via llama.cpp.

  • Updated Feb 18, 2026
  • Python
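The "heuristic-labeled SFT data" step in the entry above can be illustrated with a small sketch: a string-similarity heuristic assigns a risk label to a trademark pair, which then becomes the target output of an SFT record. The thresholds, label names, and instruction text below are hypothetical, not the repository's actual labeling rules.

```python
from difflib import SequenceMatcher

# Hypothetical heuristic: map a character-level similarity ratio to a
# coarse risk label. Real pipelines would use richer features.
def risk_label(mark_a: str, mark_b: str) -> str:
    score = SequenceMatcher(None, mark_a.lower(), mark_b.lower()).ratio()
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"

# Wrap the heuristic label into an instruction-tuning record.
def make_sft_pair(mark_a: str, mark_b: str) -> dict:
    return {
        "instruction": "Assess the similarity risk between two trademarks.",
        "input": f"Mark A: {mark_a}\nMark B: {mark_b}",
        "output": risk_label(mark_a, mark_b),
    }

example = make_sft_pair("SunTech", "SunnTech")
print(example["output"])  # near-identical marks score as "high"
```

Labels produced this way are noisy by construction, which is why the pipeline above follows training with adapter validation before merging weights.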
