Abstract: Existing methods to fine-tune LLMs, like Adapter, Prefix-tuning, and LoRA, which introduce extra modules or additional input sequences to inject new skills or knowledge, may compromise the ...
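As a point of reference for the methods named above, the sketch below shows how a LoRA-style adapter adds a small trainable low-rank update alongside a frozen weight matrix. This is only an illustrative example, not the paper's own method; the `LoRALinear` class and the `rank`/`alpha` hyperparameters are assumptions chosen for the sketch.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # original weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)   # down-projection A
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)  # up-projection B
        nn.init.zeros_(self.lora_b.weight)   # start as a no-op: output equals the base output
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction B(A(x)).
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Usage: wrap an existing projection so only the low-rank factors are trained.
layer = nn.Linear(768, 768)
wrapped = LoRALinear(layer, rank=8)
out = wrapped(torch.randn(2, 16, 768))
```

Adapter and Prefix-tuning follow the same pattern of leaving the base model untouched, but inject small bottleneck modules between layers or prepend trainable vectors to the input sequence instead of modifying the weight matrices directly.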