Lessons from Testing AI Models on Global Damage Data

13 Aug 2025

This article’s results show that Faster R-CNN with ResNet backbones beats YOLOv5 for road damage detection, noting the gains and drawing insights from failure cases.

Potholes, Pipelines, and Precision: Benchmarking Object Detectors for Global Road Safety

13 Aug 2025

This article compares YOLOv5 and Faster R-CNN for road damage detection, finding that two-stage models with ResNet backbones deliver the best generalization.

Benchmarking Faster R-CNN and YOLOv5 for Global Road Damage Detection Across Countries

13 Aug 2025

This article details model choice, tuning, and dataset prep for road damage detection, comparing Faster R-CNN and YOLOv5 on a global multi-country dataset.

When Experts Disagree, Let UNIPELT Decide

12 Aug 2025

This article reviews PELT and MoE methods, showing how UNIPELT unifies them to beat fine-tuning and individual PELT methods, and points to future work on multi-task use.

Experimental Evaluation of UNIPELT: Robust Gains Over Fine-Tuning and Individual PELT Methods

12 Aug 2025

This article shows that UNIPELT beats individual PELT methods and fine-tuning, excelling in low-resource setups and matching the best results when more data is available.

The Tuning Trifecta: UNIPELT’s Gated Symphony of BitFit, Adapter, and Prefix-Tuning

12 Aug 2025

This article outlines key PELT methods (BitFit, Adapter, Prefix-Tuning) and motivates UNIPELT, a gated hybrid combining them for robust LM tuning.

Combining PELT Methods with Gating: How UNIPELT Delivers Robust LM Tuning Across Tasks

12 Aug 2025

This article presents UNIPELT, a unified PELT framework that combines multiple tuning methods via gating to outperform fine-tuning and individual PELT methods.

The Last Rank We Need? QDyLoRA's Vision for the Future of LLM Tuning

1 Jul 2025

QDyLoRA makes LLM fine-tuning more efficient by dynamically finding the best rank, beating QLoRA and making large-model adaptation more accessible.

QDyLoRA in Action: Method, Benchmarks, and Why It Outperforms QLoRA

1 Jul 2025

QDyLoRA beats QLoRA by dynamically finding the optimal rank during quantized fine-tuning, proving more memory-efficient and more effective on large LLMs.