![Bum Chul Kwon | 권범철 | @bckwon@vis.social Profile](https://pbs.twimg.com/profile_images/857232987670360064/np2ggzaL_x96.jpg)
Bum Chul Kwon | 권범철 | @bckwon@vis.social
@BCKwon
Followers: 754 · Following: 1K · Statuses: 569
Researcher @IBMResearch. Data Visualization, Visual Analytics, Machine Learning, Health Care, HCI. Views are mine.
Joined April 2009
I am so delighted that our latest research was published in Nature Communications today! @SpringerNature @NatureComms @vibhamath @jessdunnephd @jianying_hu @markus76 @RVeijola @brigittef @IBMResearch @JDRF @medfak_LU Read here:
5 replies · 17 retweets · 62 likes
🚀 Excited to introduce MMELON, our new multi-view molecular foundation model! By combining graph, image, and text representations, MMELON delivers state-of-the-art performance on classification and regression tasks. Code: Preprint:
Multi-view biomedical foundation models for molecule-target and property prediction @IBMResearch
• The paper introduces MMELON, a multi-view molecular foundation model combining graph, image, and text views to enhance prediction of molecular properties. Unlike single-view models, MMELON leverages multiple representations for a richer, more versatile molecular embedding.
• The model performs exceptionally well on 18 diverse tasks, including ligand-protein binding, molecular solubility, metabolism, and toxicity, balancing the strengths of each modality. This versatility is critical in drug discovery and computational chemistry.
• MMELON integrates three views (graph, image, and text) to learn comprehensive molecular representations. The image view uses ImageMol (pre-trained on 10 million molecules), while the graph and text views are based on advanced transformer architectures, pre-trained on datasets of 200 million molecules.
• A novel aspect is the “late fusion” of these different modalities, ensuring each modality contributes optimally depending on the downstream task (see the sketch below). This approach yields interpretable results and allows for an analysis of how each view supports different predictions.
• For validation, MMELON was applied to screen compounds against a large set of G Protein-Coupled Receptors (GPCRs). Of these, 33 GPCRs related to Alzheimer’s disease were identified, and strong binders were predicted and validated through in silico structure modeling.
• The multi-view model shows strong correlations between predicted and experimental affinities, achieving a Pearson correlation of 0.78 for GPCR binding. This suggests the model’s robust application for identifying new therapeutics.
• Compared to single-view models, MMELON delivers superior performance across classification and regression tasks, making it an essential tool for complex molecular property predictions in drug discovery.
@jamorrone3 @jianying_hu @FeixiongCheng @jeriscience @BCKwon @timrumbell @dplatt_maths @YunguangQiu @diwakarmahajan
💻Code: 📜Paper:
#biomedicalAI #drugdiscovery #foundationmodel #multiviewlearning #GPCR #Alzheimers #machinelearning #bioinformatics
0 replies · 1 retweet · 10 likes
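The “late fusion” step described in the thread above is the part that translates most directly into code. Below is a minimal sketch, assuming late fusion means learned softmax weights mixing fixed-size per-view embeddings before a shared prediction head; the class name, dimensions, and regression head are illustrative assumptions, not MMELON’s actual implementation.

```python
# Minimal late-fusion sketch (PyTorch): learned weights mix graph,
# image, and text embeddings into one molecular representation.
# All names and sizes are hypothetical, not from the MMELON codebase.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, dim: int = 512, num_views: int = 3):
        super().__init__()
        # One learnable score per view; softmax turns them into mixture
        # weights, so each view can contribute more or less per task.
        self.view_scores = nn.Parameter(torch.zeros(num_views))
        self.head = nn.Linear(dim, 1)  # e.g., a binding-affinity regressor

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # views: one (batch, dim) embedding per modality
        weights = torch.softmax(self.view_scores, dim=0)
        fused = sum(w * v for w, v in zip(weights, views))
        return self.head(fused)

# Usage: fuse placeholder graph/image/text embeddings for 8 molecules.
graph_emb, image_emb, text_emb = (torch.randn(8, 512) for _ in range(3))
model = LateFusion()
prediction = model([graph_emb, image_emb, text_emb])  # shape: (8, 1)
```

Inspecting the learned `weights` after fine-tuning is one simple way to see how much each view contributes to a given downstream task, which matches the interpretability claim in the thread.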
RT @NobelPrize: BREAKING NEWS The 2024 #NobelPrize in Literature is awarded to the South Korean author Han Kang “for her intense poetic pro…
0 replies · 41K retweets · 0 likes
RT @graceguo43: Counterfactuals explain and reduce over-reliance on AI in clinical settings, but how do we create counterfactuals for image…
0 replies · 4 retweets · 0 likes
RT @JustinMatejka: We're hiring! The HCI & Visualization group at Autodesk Research is hiring for two Research Scientist positions at the…
0 replies · 34 retweets · 0 likes
We extended the due date for the #CHI2024 visualization literacy workshop to March 7! With co-organizers: @laneharrison @mjskay @benjbach @michelle_borkin @Birdbassador @alark @alvittao @LacePadilla @EvanMPeck, Karen Bonilla, Yuan Cui, Yiren Ding, Lily Ge, Maryam Hedayati, and David Rapp
What is visualization literacy? How can we measure it? How can we improve it for everyone? Submit your work to our CHI 2024 workshop by Feb 29 and join our discussions on defining, studying, and enhancing visualization literacy for all.
0 replies · 4 retweets · 12 likes
@graceguo43 @EndertAlex @ehudkar Woohoo! Great to finally hear the official announcement! Congrats, Grace!!! 🥳🤩🙌
0 replies · 0 retweets · 1 like
RT @NElmqvist: It’s Friday, it’s the last day of #ieeevis, and we’re now getting ready for our paper on “Visualization Thumbnails”🎞️🌆🌁🌃 by…
0 replies · 2 retweets · 0 likes
RT @prasatti: 🚀 Exciting Opportunity Alert! 🌟 Join our team as a Research Intern and contribute to the future of trustworthy foundation mod…
0 replies · 21 retweets · 0 likes
Excited to be in Toronto for #ACL2023NLP! Check out the paper, source code, and video for Finspector. I'll be presenting it at the @IBMResearch booth around 9am–10am Monday & Tuesday and in the main conference hall around 11am–12:30pm Wednesday. Let's talk 🤩!
How can we uncover hidden biases in language models that impact fairness? Our #ACL2023 demo paper introduces Finspector, an interactive visualization widget available as a Python package for Jupyter. Paper, Video, Code: @nandanamihindu #nlp #fairness
0 replies · 2 retweets · 12 likes
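Finspector, per the tweet above, is a Jupyter widget for uncovering hidden biases in language models. One standard quantity such tools let you compare is the pseudo-log-likelihood of minimally different sentences under a masked language model; below is a rough, self-contained sketch of that measure (not Finspector’s own API) using Hugging Face transformers, with an assumed model and sentence pair.

```python
# Sketch: pseudo-log-likelihood of a sentence under a masked LM, the
# kind of score a bias-inspection tool like Finspector visualizes.
# Model choice and example sentences are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log-probabilities of each token, masking one token at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# A minimally different pair: a consistent gap across many such pairs
# can hint at social bias in the model.
print(pseudo_log_likelihood("The doctor said he would be late."))
print(pseudo_log_likelihood("The doctor said she would be late."))
```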
RT @angie_boggust: It was a blast working on VisText with @bennyjtang and @arvindsatya1, and I can't wait to see how the dataset can suppor…
0 replies · 6 retweets · 0 likes