Paper: Large Language Model Meets Graph Neural Network in Knowledge Distillation
Conclusion: In this paper, we propose a novel LLM-to-GNN knowledge
distillation framework termed LinguGKD, which integrates the
semantic understanding capabilities of LLMs with the efficiency and structural insights of GNNs. LinguGKD employs TAG-oriented instruction tuning to train pre-trained LLMs
as teacher models and introduces a layer-adaptive contrastive
distillation strategy to align and transfer node features between
teacher LLMs and student GNNs within a shared latent space. Extensive experiments across various LLM and GNN architectures on multiple datasets demonstrate that LinguGKD significantly
enhances the predictive accuracy and convergence rate of
GNNs without requiring additional training data or model
parameters, making them highly practical for deployment
in resource-constrained environments. Moreover, LinguGKD
shows great potential for leveraging advancements in LLM
research to continuously augment GNN performance.
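To make the layer-adaptive contrastive distillation idea concrete, here is a minimal NumPy sketch of what such an objective could look like: teacher (LLM) and student (GNN) node features are projected into a shared latent space, an InfoNCE-style loss pulls each student node toward the teacher embedding of the same node, and per-layer losses are combined with learnable layer weights. All function names (`project`, `info_nce`, `layer_adaptive_loss`) and the exact loss form are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def project(h, W):
    """Linearly project node features into a shared latent space and l2-normalize.
    (Illustrative: the real framework may use a learned MLP projector.)"""
    z = h @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z_teacher, z_student, tau=0.1):
    """InfoNCE-style contrastive loss: each student node embedding should be
    most similar to the teacher embedding of the SAME node (the positive,
    on the diagonal), relative to all other nodes (negatives)."""
    logits = (z_student @ z_teacher.T) / tau            # (N, N) cosine similarities / temperature
    logits = logits - logits.max(axis=1, keepdims=True) # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                 # NLL of the diagonal positives

def layer_adaptive_loss(teacher_layers, student_layers, W_t, W_s, alphas, tau=0.1):
    """Weighted sum of per-layer contrastive losses; `alphas` plays the role
    of layer-adaptive weights (here fixed, in practice they could be learned)."""
    total = 0.0
    for h_t, h_s, Wt, Ws, a in zip(teacher_layers, student_layers, W_t, W_s, alphas):
        total += a * info_nce(project(h_t, Wt), project(h_s, Ws), tau)
    return total
```

In this sketch the teacher features stay fixed while gradients through the student projection and GNN would drive alignment; perfectly aligned features drive the diagonal term to dominate, so the loss approaches zero.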
~~~~~~~~
Hi there, I am Jack See, a PhD student working on AI models for molecular graph prediction. In this video, I explain knowledge distillation from LLMs to GNNs. Enjoy, and feel free to leave comments!
Find me on:
-Twitter: https:/_/ JackSee47284524 (remove the underscore)
-Linkedin: https:/_/www.linkedin.com/in/jack-see-096212244/ (remove the underscore)
#ai #research #airesearch #machinelearning #deeplearning #largelanguagemodels
16 Sep 2024