Unfortunately, due to visa issues, I will be unable to attend ICML this year🥲. Our graph foundation model paper will be presented on Thursday. I remain enthusiastic about this topic and am looking forward to pursuing a related postdoctoral position in the next academic year.
I've been hitting my early-PhD crisis these days. Getting super frustrated and always waking up early in the morning. Have you experienced it before? How do you usually deal with such a crisis?🥺
My best research partner Zhikai Chen
@drshadyk98
recently proposed a roadmap to track recent progress on graph foundation models. We are still quite far from this goal. Thanks in advance for your contributions!
Our position paper on graph foundation models has been accepted at
#ICML2024
. We provide a rough sketch of this direction, and a new, improved version will be wrapped up soon. This topic is still new, without consensus yet, so all feedback to make it better is welcome!
Feeling conflicted about Graph Foundation Models while making slides for my PhD research. Initially, I believed no GNN could work universally across all scenarios. I am on the academic job market for postdoc positions, hoping for good opportunities this year.
The new version of our Graph Foundation Model paper is now available. We add several new pieces of content: (1) a clearer definition, (2) actionable steps inspired by the principles, (3) a new understanding of LLMs, (4) the advantages over other FMs, and (5) multiple new discussions.
See our new repo, which includes (1) theoretical guidance, (2) existing benchmark datasets, and (3) a summary of existing GFMs. A new seminar focusing on GFMs will be announced soon!!!
I will be at NeurIPS this year~ I am interested in (1) the potential of LLMs with graphs, (2) how to build graph foundation models, and (3) how to use graphs to enhance LLMs. Feel free to leave me a message or come directly to my poster. Let's exchange insights and push the future of graph learning forward.
I am super happy to be one of the student organizers for the next Learning on Graphs conference. So excited to be on this wonderful team, which I have always dreamed of joining!!!
Thanks to my friend
@Abel0828
for helping me present my paper, "Revisiting Link Prediction: A Data Perspective," at
#ICLR2024
. See details in . Hope I can attend ICLR in person next time!
Just finished writing a new version of our GFM paper. Super happy and passionate throughout this journey. Many new insights were added with the help of
@michael_galkin
. The revision, with an interesting new title, will be out soon!
"A Data Generation Perspective to the Mechanism of In-Context Learning". We investigate the mechanism of In-context learning which helps ground debate on whether LLM can achieve intelligence to whether LLM can learn new data generation function in context
Just realized that this is already my fifth year working in the graph domain. The first paper I read in the graph domain was TransE, and the first GNN-related paper I read was SRGNN. How time flies~
I am pleased to announce that the Graph Foundation Model Workshop at TheWebConf 2024 will be held on Monday, May 13. The invited speakers and panel discussion will share recent research developments on GFMs. View the website for more information
The Learning on Graphs local meetup at Notre Dame is scheduled for Jan 14, 2024.
The form for participation:
The form for speakers: we welcome all graph-related topics, especially LoG-accepted papers.
📢 Dive into our latest papers on empowering graph machine learning with LLMs! 🚀 Discover three transformative pipelines: LLM-as-enhancer, LLM-as-predictor, and LLM-as-annotator. 📊🧠 Don't miss out!
#GraphML
#LLMs
Can one GFM benefit from pretraining on arbitrary graph data?
🎯Introducing Cross-Domain Graph Data Scaling: A Showcase with Diffusion Models.
🎯Pre-trained on graphs from 33 domains, UniAug improves performance across domains and downstream tasks.
🎯
Our LoG local meetup is on January 14th at Room 303, Cushing Hall of Engineering, Notre Dame, IN 46556.
We have a session for junior students to ask questions of senior researchers. You can post your questions
Agenda details are in
Thanks, Hardy, for introducing our recent work on graph foundation models. Check out our
vocabulary perspective on GFM design. It helps us connect network analysis, expressiveness, and network stability with GFM design.
I just published a Graph Omakase about Graph Foundation Models.
The discussion revolves around the critical role of foundation models within the artificial intelligence (AI) domain, emphasizing their growing importance due to the exponential increase in available models and data.
Hi all, we will have the LoG meetup at Notre Dame on January 14, 2024. We are still looking for in-person invited speakers. The speaker form can be found here.
Graph Foundation Model Workshop
@TheWebConf
was a success. This was my first time leading the organization of a workshop, and I met more difficulties than I had imagined. I learned many new things this time. A more open, exciting GFM-related workshop is coming in the near future!
Congrats to all the authors accepted at
#WWW2024
! If you already plan a trip to WWW, you are welcome to submit papers to our WWW Graph Foundation Model Workshop (GFM). For more details, please visit the official website: The submission deadline is February 5.
Just wondering a little whether there is any good tutorial for first-time reviewers (authors) on how to write a good review (rebuttal). Sometimes I struggle to conduct a meaningful discussion (whether my attitude toward the paper is positive or negative) rather than a quarrel🥲
📢New preprint "Adaptive Pairwise Encodings for Link Prediction"
We propose LPFormer, a new graph transformer designed specifically for link prediction. LPFormer adaptively tailors the pairwise information for each link by modeling multiple factors integral to link prediction.
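To make "adaptively tailoring pairwise information" concrete, here is a minimal sketch of the general idea: a per-link attention over a set of candidate factor encodings. This is my own illustration, not the actual LPFormer architecture; the class name and the factor setup are hypothetical.

```python
import torch
import torch.nn as nn

class AdaptivePairwiseEncoder(nn.Module):
    """Illustrative only, not the real LPFormer: each link attends over a
    set of candidate pairwise 'factors' (e.g., local structure, global
    structure, feature proximity) to build its own pairwise encoding."""

    def __init__(self, node_dim: int, factor_dim: int, hidden: int = 64):
        super().__init__()
        self.query = nn.Linear(2 * node_dim, hidden)  # query built from the node pair
        self.key = nn.Linear(factor_dim, hidden)
        self.value = nn.Linear(factor_dim, hidden)

    def forward(self, h_src, h_dst, factors):
        # h_src, h_dst: [B, node_dim]; factors: [B, F, factor_dim]
        q = self.query(torch.cat([h_src, h_dst], dim=-1)).unsqueeze(1)  # [B, 1, H]
        k, v = self.key(factors), self.value(factors)                   # [B, F, H]
        attn = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        return (attn @ v).squeeze(1)  # [B, H]: link-specific pairwise encoding

# Toy usage: a batch of 4 links, 3 hypothetical factors encoded as 8-dim vectors.
enc = AdaptivePairwiseEncoder(node_dim=16, factor_dim=8)
pe = enc(torch.randn(4, 16), torch.randn(4, 16), torch.randn(4, 3, 8))
print(pe.shape)  # torch.Size([4, 64])
```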
I’m traveling to New Orleans ✈️ to attend NeurIPS next week! Please reach out if you would like to meet up!
I'll be presenting two papers, on generalization and on unbiased learning to rank, respectively.
(1/2) The
@LogConference
is the leading conference dedicated to graph machine learning. The third edition is not happening this year but will take place next year (2025). Don't be too sad though; we are preparing something even bigger: there is going to be an in-person main event at
@UCLA
. If
Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking
#NeurIPS2023
- We reveal several pitfalls in link prediction.
- We conduct benchmarking to facilitate a fair and consistent evaluation.
Check out:
Learned a lot from the talented work by IR experts Philipp Hager,
@RDeffayet
, and
@mdr
. The paper finds many pitfalls in our Baidu-ULTR dataset and points out solid new directions. Compared with them, I am only a naive outsider in this domain. Nice work!
For a comprehensive understanding, we provide slides and a blog post.
When do GNNs work well on the node classification task?
When do GNNs fail on the node classification task?
Slides:
Blog:
UniAug is a structure-only graph diffusion model pre-trained on graph structures across all domains, aiming to capture the complicated structural patterns of graphs from various domains. The pre-trained model is then fine-tuned to generate data augmentation for the downstream task.
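As a rough illustration of that two-stage recipe (pre-train a structure denoiser across domains, then fine-tune and sample augmentations), here is a toy sketch. It is my own simplification with made-up sizes and classes, not the actual UniAug code.

```python
import random
import torch
import torch.nn as nn

N = 8  # toy graph size

class StructureDenoiser(nn.Module):
    """Toy stand-in for a structure-only diffusion model: predicts clean
    adjacency logits from a corrupted adjacency matrix."""
    def __init__(self, n: int):
        super().__init__()
        self.n = n
        self.net = nn.Sequential(nn.Linear(n * n, 128), nn.ReLU(), nn.Linear(128, n * n))
    def forward(self, adj):
        return self.net(adj.flatten(1)).view(-1, self.n, self.n)

def corrupt(adj, flip_p=0.2):
    """Diffusion-style corruption: flip each entry with probability flip_p."""
    mask = torch.rand_like(adj) < flip_p
    return torch.where(mask, 1.0 - adj, adj)

def train(model, graphs, steps=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        adj = random.choice(graphs)
        loss = loss_fn(model(corrupt(adj)), adj)  # denoise back to the clean structure
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: "cross-domain" pre-training on structures of different densities.
pretrain_graphs = [torch.bernoulli(torch.full((1, N, N), p)) for p in (0.1, 0.3, 0.5)]
model = StructureDenoiser(N)
train(model, pretrain_graphs)

# Stage 2: fine-tune on a downstream graph, then sample an augmented structure.
downstream = [torch.bernoulli(torch.full((1, N, N), 0.2))]
train(model, downstream, steps=50)
augmented = torch.bernoulli(torch.sigmoid(model(corrupt(downstream[0]))))
```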
Sometimes I feel happy and confident about the progress in GFMs. However, I still feel I know too little about the broader graph community. There are still too many mysteries I don't understand, e.g., why pre-training on social networks can benefit molecular graphs.
📢Train GNNs without human labeling! We propose a novel pipeline, Label-free Node Classification on Graphs with Large Language Models (LLM-GNN). Our pipeline achieves an accuracy of 74.9% on the large-scale dataset ogbn-products at a cost of less than one dollar.
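The pipeline itself is simple to sketch: ask an LLM to label a small budget of nodes from their text, then train a GNN on those pseudo-labels. Below is a toy, self-contained version; `fake_llm_annotate`, the graph, and all sizes are made up for illustration and do not reflect the paper's actual prompts or models.

```python
import torch
import torch.nn as nn

def fake_llm_annotate(node_texts):
    """Placeholder for a real LLM API call that returns a class per node text."""
    return [0 if "sports" in t else 1 for t in node_texts]

class TwoLayerGCN(nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.w1, self.w2 = nn.Linear(in_dim, hidden), nn.Linear(hidden, n_classes)
    def forward(self, a_hat, x):
        return a_hat @ self.w2(torch.relu(a_hat @ self.w1(x)))

# Toy graph: 6 nodes, symmetrically normalized adjacency with self-loops.
adj = torch.tensor([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,0,0,0],
                    [0,0,0,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], dtype=torch.float)
a = adj + torch.eye(6)
d_inv_sqrt = torch.diag(a.sum(1).rsqrt())
a_hat = d_inv_sqrt @ a @ d_inv_sqrt

x = torch.randn(6, 8)                    # node features
texts = ["sports news", "sports blog", "sports scores",
         "chip design", "gpu kernels", "compiler paper"]
budget = [0, 3]                          # small annotation budget
pseudo = torch.tensor(fake_llm_annotate([texts[i] for i in budget]))

model = TwoLayerGCN(8, 16, 2)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):                     # train only on LLM pseudo-labels
    loss = nn.functional.cross_entropy(model(a_hat, x)[budget], pseudo)
    opt.zero_grad(); loss.backward(); opt.step()
print(model(a_hat, x).argmax(1))         # predictions for all nodes
```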
📣 my labmate
@weisshelter
is on the academic job market this year! He is broadly interested in
#MachineLearning
(ML) and
#DataScience
, especially in graph ML, data-centric AI, and trustworthy AI, with applications in computational biology and social good
Such a change will alter the quantitative conclusion in Lemma 2, but a higher LSP will still result in a lower FP. Nonetheless, the reciprocal relationship will be transformed into a linear one.
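In symbols, the shape of the change is roughly the following; this is a hedged reconstruction with placeholder constants c_1, c_2, c_3 > 0, since the exact expressions live in the paper.

```latex
% Hedged reconstruction; c_1, c_2, c_3 > 0 are placeholder constants.
\text{before: } \mathrm{FP} \;\propto\; \frac{c_1}{\mathrm{LSP}},
\qquad
\text{after: } \mathrm{FP} \;\approx\; c_2 - c_3 \cdot \mathrm{LSP}.
% Both forms are decreasing in LSP, so the qualitative claim
% (higher LSP implies lower FP) survives the change.
```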
@YuanqiD
Thanks, Yuanqi. I am going to mention this universal-structure point in our recent blog. It is still kind of surprising that the transfer happens, since molecular graphs are natural while social graphs are human-made. There is large room to explore and think about why.
Just revisited my previous papers today, found one little bug in a proof (it changes some constant terms but does not affect any claim or conclusion), and relaxed one data assumption to make the proof more realistic. Will update the new version on arXiv soon. Details are as follows:
Just my own random thoughts🤣. Sometimes I feel a GFM is a useful tool that could have many applications, while the GFMs I have developed still have gaps when applied to specific practical scenarios. We need to get one step closer with the help of domain knowledge, but it is not clear where to go.
This is definitely not the final version of this paper (actually, it went through many modifications during the submission to arXiv). We welcome all your suggestions to make this position paper a better one!
Besides performance gains across node classification, link prediction, and graph classification, positive transfer can be found across domains. UniAug may outperform domain-specific pre-trained models in some cases. Notably, UniAug only utilizes structures, with no domain-specific design.
We're launching
#WSDM2023
challenges brought to you by Baidu and MSU.
Task 1: Unbiased Learning for Web Search
Task 2: Pre-training for Web Search:
Top ranks share USD 7,000 in prizes and get conference registrations to WSDM-23.
LoG Conference 2024 is back!!!👉 We are looking for more reviewers! We have a special emphasis on review quality via monetary rewards, a more focused conference topic, and a low reviewer load (max 3 papers). But for this we need your help! Sign up here:
In Lemma 1 of the paper "Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All?", I miscopied one term in the detailed proof, leading to a spurious \sqrt{2\pi} term; it should be removed.
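For readers wondering where a stray \sqrt{2\pi} can come from: my guess (an assumption on my part, not the lemma itself) is the normalizing constant of the standard Gaussian density, which is easy to carry or drop one step too far when copying a tail bound.

```latex
% Illustrative only; not the actual Lemma 1 derivation.
\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2},
\qquad
\Pr[X > t] \;\le\; \frac{\phi(t)}{t} \;=\; \frac{1}{\sqrt{2\pi}\, t}\, e^{-t^2/2}.
% Copying \phi(t) without its 1/\sqrt{2\pi} factor (or vice versa)
% leaves a spurious \sqrt{2\pi} in the final constant.
```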
Previously, I thought this paper was more of a personal interest. Now I somehow see more value in it. Maybe the data generation function is a promising perspective for understanding LLMs, as it provides a higher-level abstraction.
Our new call for
#LoG2024
local meetups is out! This "network" of local mini-conferences aims to bring together attendees belonging to the same geographic area, fostering discussions and collaborations. If you are interested in hosting one, please read this
We add a discussion of GFMs on the algorithmic reasoning task from
@PetarV_93
and discuss how algorithmic alignment may provide an additional advantage over LLMs. The corresponding CLRS dataset is also included in the dataset summary.
The call for tutorials is already up on the website. See more details at . Important deadline for tutorial proposals: Monday, September 23, 2024. Looking forward to your submissions!!
We invite work in theory, methods, and applications related to graph foundation models. We are particularly interested in applications on large networks and AI4Science tasks. Both published and under-review papers are welcome for submission (published papers will not be archived).
I think the current paper still lacks many principled discussions revolving around geometry, expressiveness, and so on. However, I am not an expert in this area. I will try my best to update this part and learn more of the related knowledge.
🎉Attention all data enthusiasts and machine learning experts! The Amazon KDD Cup 2023 competition is here with an exciting challenge on session-based recommendation systems.
#KDDCup2023
#RecommendationSystems
In the network model of the paper "Revisiting Link Prediction: A Data Perspective", \beta weights the feature proximity. The (1-\beta) term in Eq. (1) is unnecessary: it would correspond to feature similarity decreasing the similarity between two nodes.
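To see why such a term is problematic, here is a purely illustrative form in my own placeholder notation (not the paper's actual Eq. (1)), writing p_f for feature proximity and s for the resulting similarity.

```latex
% Purely illustrative; not the paper's actual Eq. (1).
s_{\text{old}}(i,j) = \beta\, p_f(i,j) - (1-\beta)\, p_f(i,j) = (2\beta - 1)\, p_f(i,j),
\qquad
s_{\text{new}}(i,j) = \beta\, p_f(i,j).
% With the extra (1-\beta) term, s decreases in feature proximity whenever
% \beta < 1/2, i.e., feature similarity could reduce node similarity;
% dropping it makes s monotonically increasing in p_f, as intended.
```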