cover

The Last Rank We Need? QDyLoRA's Vision for the Future of LLM Tuning

1 Jul 2025

QDyLoRA makes LLM fine-tuning efficient by dynamically finding the best rank, beating QLoRA and making large model adaptation more accessible.

cover

QDyLoRA in Action: Method, Benchmarks, and Why It Outperforms QLoRA

1 Jul 2025

QDyLoRA beats QLoRA by dynamically finding the optimal rank during quantized fine-tuning, proving more memory-efficient and effective on large LLMs.

cover

Beyond Static Ranks: The Power of Dynamic Quantization in LLM Fine-Tuning

1 Jul 2025

QDyLoRA enables efficient, multi-rank LLM fine-tuning in a single training session by combining quantization with dynamic rank adaptation, beating standard QLoRA.

cover

More Than a Feeling: Visualizing Why Filter Atoms Outsmart LoRA in Fine-Tuning

1 Jul 2025

VTAB-1k results show our method beats LoRA in few-shot tasks, with Grad-CAM visuals confirming better spatial awareness and superior generalization.

cover

Tuning the Pixels, Not the Soul: How Filter Atoms Remake ConvNets

1 Jul 2025

This method fine-tunes ConvNets by adapting only filter atoms, preserving pre-trained knowledge for efficient, high-performance transfer learning.

cover

Keep the Channel, Change the Filter: A Smarter Way to Fine-Tune AI Models

1 Jul 2025

Fine-tune models by adapting only spatial filter atoms, preserving channel knowledge to prevent overfitting and achieve superior, parameter-efficient results.

cover

Open Models, Closed Gaps: How Fine-Tuning Impacts AI Model Toxicity

9 Jun 2025

This study explores how fine-tuning impacts toxicity in open-source language models, backed by reproducible experiments and open-access code.

cover

Why AI Models Get More Toxic After Community Fine-Tuning

9 Jun 2025

Fine-tuning AI models can unexpectedly increase toxicity—even with non-adversarial data—raising concerns for developers and policymakers alike.