Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11851/10873
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, S. | -
dc.contributor.author | Eravci, B. | -
dc.contributor.author | Guliyev, R. | -
dc.contributor.author | Ferhatosmanoglu, H. | -
dc.date.accessioned | 2023-12-23T06:06:36Z | -
dc.date.available | 2023-12-23T06:06:36Z | -
dc.date.issued | 2023 | -
dc.identifier.isbn | 9798400701245 | -
dc.identifier.uri | https://doi.org/10.1145/3583780.3614955 | -
dc.identifier.uri | https://hdl.handle.net/20.500.11851/10873 | -
dc.description | ACM SIGIR; ACM SIGWEB | en_US
dc.description | 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023 -- 21 October 2023 through 25 October 2023 -- 193792 | en_US
dc.description.abstract | Graph Neural Network (GNN) training and inference involve significant challenges of scalability with respect to both model sizes and number of layers, resulting in degradation of efficiency and accuracy for large and deep GNNs. We present an end-to-end solution that aims to address these challenges for efficient GNNs in resource-constrained environments while avoiding the oversmoothing problem in deep GNNs. We introduce a quantization-based approach for all stages of GNNs, from message passing in training to node classification, compressing the model and enabling efficient processing. The proposed GNN quantizer learns quantization ranges and reduces the model size with comparable accuracy even under low-bit quantization. To scale with the number of layers, we devise a message propagation mechanism in training that controls layer-wise changes of similarities between neighboring nodes. This objective is incorporated into a Lagrangian function with constraints, and a differential multiplier method is utilized to iteratively find optimal embeddings. This mitigates oversmoothing and suppresses the quantization error to a bound. Significant improvements are demonstrated over state-of-the-art quantization methods and deep GNN approaches in both full-precision and quantized models. The proposed quantizer demonstrates superior performance in INT2 configurations across all stages of GNN, achieving a notable level of accuracy. In contrast, existing quantization approaches fail to generate satisfactory accuracy levels. Finally, inference with INT2 and INT4 representations exhibits speedups of 5.11× and 4.70× over the full-precision counterparts, respectively. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. | en_US
dc.description.sponsorship | Engineering and Physical Sciences Research Council, EPSRC: EP/T51794X/1 | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computing Machinery | en_US
dc.relation.ispartof | International Conference on Information and Knowledge Management, Proceedings | en_US
dc.rights | info:eu-repo/semantics/openAccess | en_US
dc.subject | graph neural networks | en_US
dc.subject | large-scale graph management | en_US
dc.subject | oversmoothing in GNNs | en_US
dc.subject | quantization | en_US
dc.subject | scalable machine learning | en_US
dc.subject | Backpropagation | en_US
dc.subject | Deep neural networks | en_US
dc.subject | Graph neural networks | en_US
dc.subject | Message passing | en_US
dc.subject | Multilayer neural networks | en_US
dc.subject | Quadratic programming | en_US
dc.subject | Large-scale graph management | en_US
dc.subject | Large-scales | en_US
dc.subject | Message propagation | en_US
dc.subject | Model size | en_US
dc.subject | Number of layers | en_US
dc.subject | Oversmoothing in GNN | en_US
dc.subject | Quantisation | en_US
dc.subject | Quantizers | en_US
dc.subject | Scalable machine learning | en_US
dc.subject | Iterative methods | en_US
dc.title | Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation | en_US
dc.type | Conference Object | en_US
dc.department | TOBB ETÜ | en_US
dc.identifier.startpage | 2626 | en_US
dc.identifier.endpage | 2636 | en_US
dc.identifier.wos | WOS:001161549502068 | en_US
dc.identifier.scopus | 2-s2.0-85178096698 | en_US
dc.institutionauthor |  | -
dc.identifier.doi | 10.1145/3583780.3614955 | -
dc.authorscopusid | 57220211189 | -
dc.authorscopusid | 43260940300 | -
dc.authorscopusid | 57472855200 | -
dc.authorscopusid | 6602337538 | -
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institution Faculty Member) | en_US
item.fulltext | No Fulltext | -
item.languageiso639-1 | en | -
item.openairetype | Conference Object | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
item.grantfulltext | none | -
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
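The abstract describes a quantizer that learns quantization ranges for low-bit (e.g. INT2) representations. As a rough illustration only, and not the authors' implementation, here is a minimal numpy sketch of uniform quantization within a given learned clipping range; the function name, the fixed range endpoints, and the rounding scheme are all assumptions made for this example.

```python
import numpy as np

def quantize(x, lo, hi, bits=2):
    """Uniformly quantize x to 2**bits levels within the range [lo, hi].

    In learned-range quantization schemes, (lo, hi) would be trained
    parameters; here they are passed in as fixed numbers to illustrate
    the forward computation only.
    """
    levels = 2 ** bits - 1                # e.g. 3 steps for INT2 (4 levels)
    x_clip = np.clip(x, lo, hi)           # clip to the (learned) range
    scale = (hi - lo) / levels            # step size between adjacent levels
    q = np.round((x_clip - lo) / scale)   # integer code in {0, ..., levels}
    return q * scale + lo                 # de-quantized approximation

x = np.array([-1.5, -0.2, 0.1, 0.9, 2.3])
print(quantize(x, lo=-1.0, hi=1.0, bits=2))
```

With `bits=2` every input collapses onto at most four representable values, which is why the choice of range matters: too wide a range wastes levels on outliers, too narrow a range clips useful signal, and that trade-off is what a learned-range quantizer optimizes.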
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.