Haitao Mao

@haitao_mao_

Followers
962
Following
305
Media
15
Statuses
207

Final-year PhD @MSU, Graph Foundation Models, Network Science, morality in LLMs, LoG 2024 Organizer. Looking for a postdoc position.

Joined September 2022
Pinned Tweet
@haitao_mao_
Haitao Mao
4 months
We discuss the potential of the new Graph Foundation Model era in this blog. It is a great pleasure to work with @michael_galkin @mmbronstein @AndyJiananZhao @zhu_zhaocheng. See more details in our paper and reading list.
@michael_galkin
Michael Galkin
4 months
📢 In our new blogpost w/ @mmbronstein @haitao_mao_ @AndyJiananZhao @zhu_zhaocheng we discuss foundation models in Graph & Geometric DL: from the core theoretical and data challenges to the most recent models that you can try already today!
2
39
124
1
3
15
@haitao_mao_
Haitao Mao
3 months
Unfortunately, due to visa issues, I will be unable to attend ICML this year🥲. Our Graph Foundation paper will be presented on Thursday. I am always enthusiastic about this topic and am looking forward to pursuing a related postdoctoral position in the next academic year.
Tweet media one
0
6
164
@haitao_mao_
Haitao Mao
10 months
Just hit my early PhD crisis these days. Getting super frustrated and always waking up early in the morning. Have you experienced it before? How do you usually deal with such a crisis?🥺
15
0
75
@haitao_mao_
Haitao Mao
10 months
My best research partner Zhikai Chen @drshadyk98 recently proposed a roadmap to track recent progress on graph foundation models. We are still quite far from this goal. Thanks in advance for your contributions!
1
13
61
@haitao_mao_
Haitao Mao
6 months
Our position paper on graph foundation models has been accepted at #ICML2024. We provide a rough description of this direction; a new, better version will be wrapped up soon. This topic is still new and lacks consensus. All feedback to make it better is welcome!
3
9
51
@haitao_mao_
Haitao Mao
4 months
Feeling conflicted about Graph Foundation Models while making slides for my PhD research. Initially, I believed no GNN could work universally across all scenarios. I am on the academic job market for postdoc positions, hoping for good opportunities this year.
Tweet media one
5
10
46
@haitao_mao_
Haitao Mao
5 months
The new version of our Graph Foundation Model paper is now available. We add multiple new contents: (1) a clearer definition, (2) actionable steps inspired by the principles, (3) a new understanding of LLMs, (4) the advantages over other FMs, and (5) multiple new discussions.
3
11
46
@haitao_mao_
Haitao Mao
9 months
See our new repo, which includes (1) theoretical guidance, (2) existing benchmark datasets, and (3) a summary of existing GFMs. A new seminar focusing on GFMs will be up soon!!!
4
16
38
@haitao_mao_
Haitao Mao
11 months
I will be at NeurIPS this year~ I am interested in (1) the potential of LLMs with graphs, (2) how to build graph foundation models, and (3) how to use graphs to enhance LLMs. Feel free to leave me a message or come directly to my poster. Let's share more insights and push the future of graphs forward.
0
2
33
@haitao_mao_
Haitao Mao
8 months
I am super happy to be one of the student organizers for the next Learning on Graphs conference. So excited to be on this wonderful team, which I have always dreamed of!!!
1
0
29
@haitao_mao_
Haitao Mao
6 months
Thanks to my friend @Abel0828 for helping me present my paper "Revisiting Link Prediction: A Data Perspective" at #ICLR2024. See details in . Hope I can attend ICLR in person next time.
Tweet media one
2
1
27
@haitao_mao_
Haitao Mao
5 months
Just finished writing a new version of our GFM paper. Super happy and passionate along this journey. Many new insights were added with the help of @michael_galkin. The revision, with an interesting new title, will be up soon!
Tweet media one
3
3
25
@haitao_mao_
Haitao Mao
1 month
Starting the internship journey at Snap Inc. in Bellevue.
Tweet media one
1
0
25
@haitao_mao_
Haitao Mao
6 months
"A Data Generation Perspective to the Mechanism of In-Context Learning". We investigate the mechanism of In-context learning which helps ground debate on whether LLM can achieve intelligence to whether LLM can learn new data generation function in context
2
5
23
@haitao_mao_
Haitao Mao
9 months
Just realized that this is already my fifth year working in the graph domain. The first paper I read in the graph domain was TransE, and the first GNN-related paper I read was SRGNN. How time flies~
0
0
24
@haitao_mao_
Haitao Mao
6 months
I am pleased to announce that the Graph Foundation Model Workshop at TheWebConf 2024 will be held on Monday, May 13. The invited speakers and panel discussion will share research developments on GFMs. View the website for more information.
0
5
23
@haitao_mao_
Haitao Mao
11 months
The Learning on Graphs local meetup at Notre Dame is set for Jan 14, 2024. The form for participation: The form for speakers. All graph-related topics are welcome, especially LoG-accepted papers.
1
9
23
@haitao_mao_
Haitao Mao
1 year
Check out our efforts on incorporating LLMs with graphs. Our repository is efficient and easy to use for your next research step. Welcome to star and follow!
@drshadyk98
czk
1 year
📢 Dive into our latest papers on empowering Graph Machine Learning with LLM! 🚀 Discover three transformative pipelines: LLM-as-enhancer, LLM-as-predictor, and LLM-as-annotator. 📊🧠 Don't miss out! #GraphML #LLMs
Tweet media one
2
8
19
0
2
21
@haitao_mao_
Haitao Mao
5 months
Can one GFM benefit from pretraining on arbitrary graph data? 🎯Introducing Cross-Domain Graph Data Scaling: A Showcase with Diffusion Models. 🎯Pre-trained on graphs from 33 domains, UniAug can improve performance across domains and downstream tasks. 🎯
Tweet media one
1
3
21
@haitao_mao_
Haitao Mao
10 months
Our LoG local meetup is on January 14th in Room 303, Cushing Hall of Engineering, Notre Dame, IN 46556. We have a session for junior students to ask questions of senior researchers. You can post your questions. Agenda details are in
0
5
16
@haitao_mao_
Haitao Mao
8 months
Thanks to Hardy for introducing our recent work on graph foundation models; check out our vocabulary perspective on GFM design. It helps us connect network analysis, expressiveness, and network stability with GFM design.
@jandland2
Hardy Jeong
8 months
I just published a graph omakase about Graph Foundation Models. The discussion revolves around the critical role of Foundation Models within the artificial intelligence (AI) domain, emphasizing their growing importance due to the exponential increase in available models and data.
0
0
10
1
2
15
@haitao_mao_
Haitao Mao
1 year
I am even more excited to bid on papers for Learning on Graphs @LogConference than for NeurIPS. The paper quality is so high and attractive!
0
2
15
@haitao_mao_
Haitao Mao
10 months
Hi all, we will have the LoG meetup at Notre Dame on the 14th of January 2024. We are still looking for in-person invited speakers. The speaker form can be found here.
2
8
14
@haitao_mao_
Haitao Mao
5 months
The Graph Foundation Model Workshop @TheWebConf was a success. This was my first time leading the organization of a workshop, and I met many more difficulties than I had imagined. I learned many new things this time. A more open, exciting GFM-related workshop is coming in the near future.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
0
13
@haitao_mao_
Haitao Mao
1 year
After two years of effort in reviewing NeurIPS and ICML, I am excited to finally get the chance to serve as a reviewer for ICLR!
2
0
10
@haitao_mao_
Haitao Mao
9 months
Congrats to all the authors who got accepted to #WWW2024! If you are already planning a trip to WWW, you are welcome to submit papers to our WWW Graph Foundation Model (GFM) Workshop. For more details, please visit the official website: The submission deadline is February 5.
1
1
10
@haitao_mao_
Haitao Mao
6 months
The most amazing paper I have read during my PhD so far.
1
0
10
@haitao_mao_
Haitao Mao
2 months
Just wondering a little bit whether there is any good tutorial for a first-time reviewer (author) to learn how to write a good review (rebuttal). Sometimes I struggle to conduct a meaningful discussion (whether with a positive or negative attitude towards the paper) rather than a quarrel🥲
1
0
8
@haitao_mao_
Haitao Mao
1 year
See our new preprint about the first graph transformer for link prediction. It is efficient, effective, and adaptive!
@hshomer97
Harry Shomer
1 year
📢New preprint "Adaptive Pairwise Encodings for Link Prediction" We propose a new graph transformer designed specifically for link prediction - LPFormer. LPFormer adaptively tailors the pairwise information to each link by modeling multiple factors integral to link prediction
1
7
9
1
0
7
@haitao_mao_
Haitao Mao
5 months
When talking about Graph Foundation Models, how important is the feature heterogeneity problem, and how can we effectively solve it?
2
0
8
@haitao_mao_
Haitao Mao
2 years
I’m traveling to New Orleans ✈️ to attend NeurIPS next week! Please reach out if you would like to meet up! I'll be presenting two papers about generalization and unbiased learning to rank respectively
2
0
7
@haitao_mao_
Haitao Mao
5 months
The first in-person LoG conference you cannot miss!!! See you in LA!
@miniapeur
Mathieu Alain
5 months
(1/2) The @LogConference is the leading conference dedicated to graph machine learning. The third edition is not happening this year, but next year (2025). Don't be too sad though, we are preparing something even bigger: there is going to be an in-person main event at @UCLA . If
Tweet media one
3
15
88
0
0
6
@haitao_mao_
Haitao Mao
1 year
Check out our new work
@dse_msu
DSE Lab @ MSU
1 year
Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking? #NeurIPS2023 - We reveal several pitfalls in link prediction. - We conducted benchmarking to facilitate a fair and consistent evaluation. Check out:
0
5
20
0
0
6
@haitao_mao_
Haitao Mao
11 months
I also believe in this principle: the new paper should often be better than the previous one.
Tweet media one
0
1
4
@haitao_mao_
Haitao Mao
7 months
Learned a lot from the talented work by IR experts Philipp Hager, @RDeffayet, and @mdr. The paper finds many pitfalls in our Baidu-ULTR dataset and points out solid new directions. Compared with them, I am only a naive outsider in this domain. Nice work!
3
0
5
@haitao_mao_
Haitao Mao
1 year
For a comprehensive understanding, we provide slides and blogs: When do GNNs work well on the node classification task? When do GNNs fail on the node classification task? Slides: Blog:
1
1
4
@haitao_mao_
Haitao Mao
5 months
UniAug is a structure-only graph diffusion model pre-trained on graph structures across all domains, aiming to understand the complicated structures of graphs from various domains. The pre-trained model is then fine-tuned to generate data augmentation for the downstream task.
Tweet media one
1
0
4
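Since the tweet only names the two stages, here is a minimal, hypothetical sketch of the cross-domain pre-train-then-augment workflow it describes. This is not the authors' UniAug code: the toy logistic "denoiser", the edge-flip corruption, and the random-graph corpus are stand-ins for the actual graph diffusion model and multi-domain data; only the workflow (fit one structure-only model on pooled graphs, then reuse it to propose edges for an unseen downstream graph) follows the tweet.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph(n, p):
    """Sample a symmetric adjacency matrix without self-loops."""
    a = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return a + a.T

def corrupt(adj, noise):
    """Forward step of a discrete edge-flip diffusion: flip each node pair with prob `noise`."""
    flip = np.triu((rng.random(adj.shape) < noise).astype(float), 1)
    flip = flip + flip.T
    return np.abs(adj - flip)

def pair_features(adj):
    """Per-pair features for the toy denoiser: degree sum, common neighbours, bias."""
    deg, cn = adj.sum(1), adj @ adj
    i, j = np.triu_indices(adj.shape[0], 1)
    return np.stack([deg[i] + deg[j], cn[i, j], np.ones(len(i))], axis=1), (i, j)

def pretrain(graphs, noise=0.2, steps=300, lr=0.1):
    """Fit w so that sigmoid(features(noisy graph) @ w) recovers the clean edges."""
    w = np.zeros(3)
    for _ in range(steps):
        adj = graphs[rng.integers(len(graphs))]
        x, (i, j) = pair_features(corrupt(adj, noise))
        y = adj[i, j]
        p = 1.0 / (1.0 + np.exp(-np.clip(x @ w, -30, 30)))
        w -= lr * x.T @ (p - y) / len(y)          # logistic-regression gradient step
    return w

def augment(adj, w, noise=0.2, keep=0.05):
    """Use the pre-trained denoiser to propose extra edges for a downstream graph."""
    x, (i, j) = pair_features(corrupt(adj, noise))
    p = 1.0 / (1.0 + np.exp(-np.clip(x @ w, -30, 30)))
    new = adj.copy()
    top = np.argsort(-p)[: int(keep * len(p))]    # highest-scoring pairs become edges
    new[i[top], j[top]] = new[j[top], i[top]] = 1.0
    return new

# "Cross-domain" pre-training corpus: sparse and dense random graphs stand in for
# structures from different domains.
corpus = [random_graph(30, 0.05) for _ in range(20)] + [random_graph(30, 0.3) for _ in range(20)]
w = pretrain(corpus)
downstream = random_graph(30, 0.1)
augmented = augment(downstream, w)
print("edges before/after augmentation:", int(downstream.sum() // 2), int(augmented.sum() // 2))
```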
@haitao_mao_
Haitao Mao
2 months
@chaitjo My experience is much simpler; the reviewer wrote just one sentence: he does not believe our method can work. Rating 3 with confidence 5.
0
0
4
@haitao_mao_
Haitao Mao
4 months
Sometimes I feel happy and confident about the progress in GFMs. However, I always feel I still know too little about the broader graph community. There are still too many mysteries I don't understand, e.g., why pre-training on social networks can benefit molecular graphs.
1
0
3
@haitao_mao_
Haitao Mao
1 year
Check out our paper! One dollar can label a graph of 2.4M+ nodes with 75% accuracy!
@dse_msu
DSE Lab @ MSU
1 year
📢Train GNNs without human labeling! We propose a novel pipeline, Label-free Node Classification on Graphs with Large Language Models(LLM-GNN). Our pipeline can achieve an accuracy of 74.9% on a vast-scale dataset ogb-products with a cost of less than 1 dollar.
1
1
10
0
0
3
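For readers unfamiliar with the pipeline in the quoted tweet, here is a minimal, hypothetical sketch of the label-free idea: query an LLM for labels on a tiny budget of nodes, then train a GNN on those pseudo-labels. The `llm_annotate` placeholder, the degree-based node selection, and the one-layer classifier are illustrative assumptions on synthetic data, not the paper's LLM-GNN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_feat, n_class, budget = 200, 16, 3, 20

# Toy attributed graph: features carry a weak class signal, edges favour same-class pairs.
true_y = rng.integers(n_class, size=n)
x = rng.normal(size=(n, n_feat)) + np.eye(n_class)[true_y] @ rng.normal(size=(n_class, n_feat))
adj = ((true_y[:, None] == true_y[None, :]) & (rng.random((n, n)) < 0.05)).astype(float)
adj = np.maximum(adj, adj.T) + np.eye(n)               # symmetrise and add self-loops
a_hat = adj / adj.sum(1, keepdims=True)                # row-normalised propagation matrix

def llm_annotate(node_ids):
    """Placeholder for LLM annotation: the real pipeline would prompt an LLM with each
    node's text; here we return the true label with 25% simulated annotation noise."""
    wrong = rng.random(len(node_ids)) < 0.25
    return np.where(wrong, rng.integers(n_class, size=len(node_ids)), true_y[node_ids])

# Steps 1-2: spend the small labelling budget on high-degree nodes and query the "LLM".
seed_nodes = np.argsort(-adj.sum(1))[:budget]
pseudo_y = llm_annotate(seed_nodes)

# Step 3: a one-layer graph-convolution softmax classifier trained only on pseudo-labels.
h = a_hat @ x                                          # neighbourhood-smoothed features
w = np.zeros((n_feat, n_class))
onehot = np.eye(n_class)[pseudo_y]
for _ in range(300):
    logits = h @ w
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    w -= 0.5 * h[seed_nodes].T @ (p[seed_nodes] - onehot) / budget

pred = (h @ w).argmax(1)
mask = ~np.isin(np.arange(n), seed_nodes)
print("accuracy on nodes never labelled by the LLM:", round(float((pred == true_y)[mask].mean()), 3))
```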
@haitao_mao_
Haitao Mao
1 year
Our research provides: (1) interesting toy examples, (2) rigorous theoretical analysis, (3) comprehensive experimental results, (4) practical guidance for practitioners, (5) new applications & scenarios, and (6) fruitful future directions.
2
0
3
@haitao_mao_
Haitao Mao
6 months
@PetarV_93 Thanks, Petar! I will add it to our revision.
0
0
3
@haitao_mao_
Haitao Mao
9 months
What is the use of mechanism understanding in LLMs?
0
0
2
@haitao_mao_
Haitao Mao
2 years
📣 My labmate @weisshelter is on the academic job market this year! He is broadly interested in #MachineLearning (ML) and #DataScience, especially graph ML, data-centric AI, and trustworthy AI, with applications in computational biology and social good.
0
1
1
@haitao_mao_
Haitao Mao
3 months
Such a change will alter the quantitative conclusion in Lemma 2, but a higher LSP will still result in a lower FP. Nonetheless, the reciprocal relationship will be transformed into a linear one.
Tweet media one
1
0
2
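The change described in this tweet can be written out schematically. The constants below are hypothetical placeholders rather than the lemma's exact expressions, reading LSP and FP as the structural- and feature-proximity quantities from the thread:

```latex
% Schematic only: c_1, c_2 > 0 are hypothetical constants standing in for the
% quantities in Lemma 2; LSP and FP are the proximity quantities from the thread.
\begin{align*}
  \text{before the change:} \quad & \mathrm{FP} \propto \frac{1}{\mathrm{LSP}}
    & \text{(reciprocal; higher LSP gives lower FP)} \\
  \text{after the change:}  \quad & \mathrm{FP} \approx c_1 - c_2\,\mathrm{LSP}
    & \text{(linear; higher LSP still gives lower FP)}
\end{align*}
```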
@haitao_mao_
Haitao Mao
5 months
@YuanqiD Thanks Yuanqi. I am going to mention this universal-structure point in our recent blog. It is still kind of surprising that the transfer happens, as molecular graphs arise in nature while social graphs are manually constructed. There is a lot of room to explore and think about why.
0
0
2
@haitao_mao_
Haitao Mao
3 months
Just revisited my previous papers today and found one little bug in my proof (it changes some constant terms but does not affect any claim or conclusion) and relaxed one data assumption to make the proof more realistic. Will update the new version on arXiv soon. Details are as follows:
2
0
2
@haitao_mao_
Haitao Mao
6 months
Still a rookie work, where I found there are many core challenges in the NLP domain that I had never considered before😂
0
0
2
@haitao_mao_
Haitao Mao
2 years
Hall J 1013, 11am. See you there for my paper, a large-scale dataset for unbiased learning to rank.
0
0
2
@haitao_mao_
Haitao Mao
3 months
Thanks to @Abel0828, who first pointed out this issue and helped my paper become a better version!
0
0
2
@haitao_mao_
Haitao Mao
4 months
Just my own random thoughts🤣. Sometimes I feel GFMs are a useful tool that could have many applications, while the GFMs I have developed still have gaps when applied to specific practical scenarios. We need to get one step closer with the help of domain knowledge, but it is not clear where to go.
1
0
2
@haitao_mao_
Haitao Mao
5 months
This is definitely not the final version of this paper (actually, it went through many modifications during the submission to arXiv). We welcome all your suggestions to make this position paper a better one!
1
0
1
@haitao_mao_
Haitao Mao
4 months
@ShuiwangJi @michael_galkin @YuanqiD @mmbronstein @AndyJiananZhao @zhu_zhaocheng Yes, that is an interesting point😂. A strategy I use for reading GFM papers is to first read the experiment setting and major results and then read the abstract and intro.
0
0
2
@haitao_mao_
Haitao Mao
10 months
Given that we can only run a finite number of tests, how can we understand a system whose potential scope is infinite?
1
0
2
@haitao_mao_
Haitao Mao
5 months
Besides performance gains across node classification, link prediction, and graph classification, positive transfer can be found across domains. UniAug may outperform a domain-specific pre-trained model in some cases. Notably, UniAug only utilizes structures, with no domain-specific design.
Tweet media one
0
0
1
@haitao_mao_
Haitao Mao
2 years
Welcome to join our WSDM Cup competition!
@WSDMSocial
WSDM Conference
2 years
We're launching #WSDM2023 challenges brought to you by Baidu and MSU. Task 1: Unbiased Learning for Web Search. Task 2: Pre-training for Web Search. Top ranks share USD 7,000 in prizes and get conference registrations to WSDM-23.
Tweet media one
3
7
13
0
0
1
@haitao_mao_
Haitao Mao
9 months
1
0
1
@haitao_mao_
Haitao Mao
2 months
@deviparikh @chaitjo Thanks for sharing!
0
0
1
@haitao_mao_
Haitao Mao
5 months
It is LoG 2024!!! We are looking for reviewers!
@LogConference
Learning on Graphs Conference 2024
5 months
LoG Conference 2024 is back !!!👉 We are looking for more reviewers! We have a special emphasis on review quality via monetary rewards, a more focused conference topic, and low reviewer load (max 3 papers). But for this we need your help! Sign up here: !
2
50
120
0
0
1
@haitao_mao_
Haitao Mao
3 months
In Lemma 1 of the paper "Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All?", I miscopied one term in the detailed proof, leading to the \sqrt{2\pi} term; it should be removed.
Tweet media one
0
1
1
@haitao_mao_
Haitao Mao
4 months
Feel free to leave comments on our paper. I will keep updating our paper towards a more holistic view.
0
0
1
@haitao_mao_
Haitao Mao
4 months
Every month I jump between convincing myself and doubting myself🤣
0
0
1
@haitao_mao_
Haitao Mao
6 months
Previously, I thought this paper was more of a personal interest. Now I somehow see more value in it. Maybe the data generation function is a promising perspective for understanding LLMs, as it indeed provides a higher-level abstraction.
1
0
1
@haitao_mao_
Haitao Mao
4 months
🙌
@LogConference
Learning on Graphs Conference 2024
4 months
Our new call for #LoG2024 local meetups is out! This "network" of local mini-conferences aims to bring together attendees belonging to the same geographic area, fostering discussions and collaborations. If you are interested in hosting one, please read this
1
11
36
0
0
1
@haitao_mao_
Haitao Mao
5 months
We add a discussion of GFMs on the algorithmic reasoning task from @PetarV_93 and discuss how algorithmic alignment may provide an additional advantage over LLMs. The corresponding CLRS dataset is also included in the dataset summary.
0
0
1
@haitao_mao_
Haitao Mao
8 months
We will update the discussion on graph LLMs later with our recent new understanding.
0
0
1
@haitao_mao_
Haitao Mao
4 months
LoG2024 incoming!!!!
@LogConference
Learning on Graphs Conference 2024
4 months
Call for tutorials is already up on the website. See more details at . Important Deadline for Tutorial proposals: Monday, September 23, 2024. Looking forward to your submission!!
0
10
26
0
0
1
@haitao_mao_
Haitao Mao
10 months
We invite work in theory, methods, and applications related to graph foundation models. We are particularly interested in applications on large networks and AI4Science tasks. Both published and under-review papers (published papers will not be archived) are welcome for submission.
1
1
1
@haitao_mao_
Haitao Mao
5 months
I think the current paper still lacks many principled discussions revolving around geometry, expressiveness, and so on. However, I am not an expert in this area. I will try my best to update this part and learn more related knowledge.
1
0
1
@haitao_mao_
Haitao Mao
9 months
We invite work in theory, methods, and applications related to graph foundation models. We are particularly interested in applications on large networks and AI4Science tasks. Both published and under-review papers (published papers will not be archived) are welcome for submission.
0
0
0
@haitao_mao_
Haitao Mao
1 year
When do GNNs work and fail on the link prediction task? Slides:
0
1
1
@haitao_mao_
Haitao Mao
2 years
@YuanqiD So happy to have this year with u~
0
0
1
@haitao_mao_
Haitao Mao
7 months
@chaitjo Do we need to take some action at the next LoG😂
1
0
1
@haitao_mao_
Haitao Mao
2 years
Welcome to join KDD Cup 2023!
@kdd_news
SIGKDD 2024
2 years
🎉Attention all data enthusiasts and machine learning experts! The Amazon KDD Cup 2023 competition is here with an exciting challenge on session-based recommendation systems. #KDDCup2023 #RecommendationSystems
2
12
51
0
0
1
@haitao_mao_
Haitao Mao
1 year
@SitaoLuan Thanks Sitao. I noticed your work as soon as it was on arXiv. I also have a detailed discussion of your paper in the related work section😂
0
0
1
@haitao_mao_
Haitao Mao
11 months
Join the LoG!
@LogConference
Learning on Graphs Conference 2024
11 months
LoG is happening tomorrow! Highlights of the program: 🎤Exciting keynotes from @jure , @andreasloudaros , Stefanie Jegelka, @KyleCranmer , @ktschuett 🌟 12 orals 💻 Tutorials on scalability & recommendation 🤗 poster sessions & networking Join now via
Tweet media one
1
32
109
0
0
1
@haitao_mao_
Haitao Mao
3 months
In the network model of the paper "Revisiting Link Prediction: A Data Perspective", \beta is for the feature proximity. It is unnecessary to have the (1-\beta) term in eq. (1), since that would correspond to feature similarity decreasing the similarity between two nodes.
Tweet media one
1
0
1
@haitao_mao_
Haitao Mao
9 months
Thanks to the effort of this amazing guy @drshadyk98 !
0
0
1