The Quest for Winning Tickets in Low-Rank Adapters
Jan 23, 2025
Hamed Damirchi, Cristian Rodriguez-Opazo, Ehsan Abbasnejad, Zhen Zhang, Javen Qinfeng Shi

Abstract
Low-Rank Adaptation (LoRA) fine-tunes large pre-trained models efficiently by adding low-rank parameter matrices. However, LoRA updates entire parameter blocks even though only task-relevant subspaces need to be adjusted. Inspired by the Lottery Ticket Hypothesis (LTH), we explore sparse subnetworks within low-rank adapters and find that 'winning tickets' exist: LoRAs randomly pruned to a task-specific sparsity can match the performance of dense adapters. Building on this, we propose Partial-LoRA, which identifies and integrates sparse low-rank parameters linked to key subspaces of the pre-trained weights. Experiments across 8 vision and 7 language tasks show that Partial-LoRA reduces trainable parameters by up to 87% while maintaining or improving performance and significantly lowering memory use, and it offers a theoretical foundation for sparse LoRA designs.
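For readers unfamiliar with the setup, the sketch below illustrates the general idea of a LoRA layer whose low-rank update is randomly pruned with a fixed binary mask. It is a minimal illustration assuming PyTorch, not the Partial-LoRA implementation from the paper; the class name `MaskedLoRALinear`, the mask construction, and the hyperparameters (`rank`, `sparsity`, `alpha`) are placeholders chosen for clarity.

```python
# Minimal sketch (not the authors' implementation) of a LoRA linear layer
# whose low-rank update is randomly pruned with a fixed binary mask.
import torch
import torch.nn as nn


class MaskedLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, sparsity=0.5, alpha=16.0):
        super().__init__()
        # Frozen stand-in for the pre-trained weight of the original layer.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features), requires_grad=False
        )
        # Trainable low-rank factors, as in standard LoRA.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank
        # Fixed random binary mask over the low-rank update ("random pruning");
        # Partial-LoRA instead selects entries tied to key subspaces of the weights.
        mask = (torch.rand(out_features, in_features) > sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Dense frozen path plus the masked (sparse) low-rank update.
        delta_w = (self.lora_B @ self.lora_A) * self.scaling
        return x @ (self.weight + self.mask * delta_w).T


# Quick shape check.
x = torch.randn(4, 32)
layer = MaskedLoRALinear(32, 64, rank=4, sparsity=0.9)
print(layer(x).shape)  # torch.Size([4, 64])
```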
        
        
        
          Publication
          Preprints