Abstract: Existing methods for fine-tuning LLMs, such as Adapter, Prefix-tuning, and LoRA, which introduce extra modules or additional input sequences to inject new skills or knowledge, may compromise the ...