Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.11851/12780

Title: A LoRa Based Method for Efficient Model Parameter Reduction
Authors: Sahin, A.; Cinar, M.; Akgün, T.
Keywords: CNN; LoRa; Model Optimization; Parameter Efficiency; Parameter Reduction; Transformer
Publisher: Institute of Electrical and Electronics Engineers Inc.
Abstract: This paper presents an efficient model parameter reduction technique that is applicable across a wide range of neural network architectures, including convolutional neural networks (CNNs) and transformer-based models. Motivated by the LoRA (low-rank adaptation) method originally proposed for efficiently fine-tuning transformer architectures, the presented technique aims to enhance model parameter efficiency through a generalized approach that can be seamlessly integrated into virtually any network architecture. Extensive experiments with state-of-the-art CNN and transformer models demonstrate the robustness and versatility of the proposed technique, which achieves the same level of accuracy while using almost half as many parameters. The results highlight the potential of this method as a universal optimization strategy for modern deep learning frameworks, offering a valuable tool for practitioners and researchers seeking to lower the computational load and memory usage of deep models during inference. © 2025 Elsevier B.V., All rights reserved.
URI: https://doi.org/10.1109/AMLDS63918.2025.11159364
     https://hdl.handle.net/20.500.11851/12780
ISBN: 9798331510992
Appears in Collections: Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
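The abstract describes a LoRA-motivated low-rank scheme for reducing parameter counts, but the record does not give the paper's exact construction. The following is a minimal illustrative sketch of the general idea only: replacing a dense weight matrix with a rank-r factorization, so that parameters scale with r(d_out + d_in) instead of d_out * d_in. All layer sizes and the use of truncated SVD here are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch (not the paper's code): low-rank factorization of a
# dense weight matrix, in the spirit of LoRA-style parameter reduction.
# A full d_out x d_in weight W is approximated by B @ A, where
# B is d_out x r and A is r x d_in, with r << min(d_out, d_in).
import numpy as np

d_out, d_in, r = 512, 512, 64  # hypothetical layer sizes and rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))

# Best rank-r approximation of W via truncated SVD (Eckart-Young theorem).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
B = U[:, :r] * s[:r]   # d_out x r, singular values folded into the left factor
A = Vt[:r, :]          # r x d_in

full_params = W.size              # 512 * 512 = 262144
lowrank_params = B.size + A.size  # 2 * 512 * 64 = 65536 (4x fewer)

# Forward pass with the factorized layer: compute A @ x first, then B @ (.),
# which also reduces the inference-time multiply count.
x = rng.standard_normal(d_in)
y_full = W @ x
y_lowrank = B @ (A @ x)
print(full_params, lowrank_params)
```

With a square 512-dimensional layer and rank 64, the factorized form stores a quarter of the original parameters; the abstract's reported "almost half" presumably reflects whole-model counts and the paper's specific rank choices, which are not stated in this record.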
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.