![Patrick Kidger Profile](https://pbs.twimg.com/profile_images/1288452866315780099/E3jmyQK8_x96.jpg)
Patrick Kidger
@PatrickKidger
Followers: 10K · Following: 917 · Statuses: 1K
I do SciML + open source! 🧪 ML+proteins @ https://t.co/04dWAWzCyl 📚 Neural ODEs: https://t.co/ODOKWjub5k 🤖 JAX ecosystem: https://t.co/8kXzaG9XVf 🧑‍💻 Prev. Google, Oxford
Mostly Zürich? I travel a lot.
Joined July 2020
@typedfemale @avt_im +1 for Ax as a library I've enjoyed using. I haven't done that careful an evaluation myself, though, so I'd be curious to see you report back on how the comparison goes!
0 · 0 · 1
This is a great essay. One thing I'd add: a great remaining difficulty is our inability to 'debug' a biological system! My guess is that cloud labs + automation + ML will be the necessary pieces for making each part of the stack reliable -- and thus for lowering the barrier to entry.
my first (and possibly only) substack post! I write about why everyone in my lab is cringe and sucks. link in replies
0 · 5 · 41
✨Zurich AI meetup! ✨ We (Cradle) will be there to chat proteins+ML. 🚀 If you're in the city come say hi!
Zurich AI meetup speakers and friends for 4 Mar!
- Peter Kontschieder, Research Director at Meta. We met back in Mapillary (acq. Meta) days!
- Philippe Schwaller, Asst. Prof at EPFL, AI for chemistry. I love his papers.
- Stef van Grieken, CEO at @cradlebio, AI for proteins. We met back in his Google days!
This'll be a blast :)
2 · 1 · 20
@zhang_muru @togethercompute @MayankMish98 (a) this is very cool (b) this looks like a reversible architecture! Can you backprop in O(1) memory? Cf. also the SOTA on reversible ODE solvers
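For readers following the reversibility point: a minimal sketch (mine, not from the work under discussion) of an additive coupling block. Because the forward map is exactly invertible, a backward pass can recompute activations from the outputs rather than storing them, which is what enables O(1)-in-depth memory:

```python
import jax
import jax.numpy as jnp

def f(x, w):
    # Arbitrary sub-network; a single tanh layer for brevity.
    return jnp.tanh(x @ w)

def forward(x1, x2, w1, w2):
    # Additive coupling (RevNet-style): invertible by construction.
    y1 = x1 + f(x2, w1)
    y2 = x2 + f(y1, w2)
    return y1, y2

def inverse(y1, y2, w1, w2):
    # Recompute the inputs from the outputs -- no stored activations needed.
    x2 = y2 - f(y1, w2)
    x1 = y1 - f(x2, w1)
    return x1, x2

k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)
x1, x2 = jax.random.normal(k1, (4,)), jax.random.normal(k2, (4,))
w1, w2 = jax.random.normal(k3, (4, 4)), jax.random.normal(k4, (4, 4))
y1, y2 = forward(x1, x2, w1, w2)
r1, r2 = inverse(y1, y2, w1, w2)
assert jnp.allclose(r1, x1, atol=1e-5) and jnp.allclose(r2, x2, atol=1e-5)
```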
0 · 0 · 3
✨We're hiring for bio roles at Cradle! Basically, we want quantitative bio PhDs who care about translational/applied research: bridging the details of real-world protein design projects and the capabilities of ML. You can expect to work on a huge variety of problems (antibodies, enzymes, peptides, ...) across all modalities (therapeutics, agtech, biosynthesis, ...), as at this point (post series B) we work with pretty much the whole industry. DMs open for any questions and link below! 💥⭐️
3 · 7 · 90
@AlexLaterre @charliermarsh Haha yes! I was thinking about this at the same time. I've previously written half-a-package-manager for a work project and... that's not something I want to do again. :p
0 · 0 · 2
Starting to read up on the announcement now. Thank you for the thread, this is super useful. Also, a big +1 to this point in particular. The university<>startup<>bigtech flywheel is still by far the strongest in the US, and IMO tighter connections like this are an important part of improving that.
1 · 0 · 4
✨ Also, note that Equinox is a library, not a framework. That is to say, it interoperates with anything else in JAX. (C.f. PyTorch-vs-JAX, where once you've made a choice you kind of have to stick with it.) The implication is that you don't need to rely on network effects ('escape velocity') for it to be useful -- it'll still just work with whatever non-Equinox stuff you bring in as well! :)
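A minimal sketch of what that interoperability looks like (standard Equinox usage; the toy data and loss are mine): an Equinox model is just a PyTree, so plain `jax.vmap` composes with it, and `eqx.filter_jit`/`eqx.filter_grad` are thin filtered wrappers over `jax.jit`/`jax.grad`:

```python
import equinox as eqx
import jax
import jax.numpy as jnp

# An Equinox model is a PyTree of arrays (plus static metadata).
model = eqx.nn.MLP(in_size=2, out_size=1, width_size=32, depth=2,
                   key=jax.random.PRNGKey(0))

@eqx.filter_jit     # filtered wrapper around jax.jit
@eqx.filter_grad    # filtered wrapper around jax.grad
def loss_grads(model, x, y):
    pred = jax.vmap(model)(x)        # plain jax.vmap, nothing Equinox-specific
    return jnp.mean((pred - y) ** 2)

x, y = jnp.ones((8, 2)), jnp.zeros((8, 1))
grads = loss_grads(model, x, y)      # `grads` is a PyTree shaped like `model`
```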
1 · 1 · 5
@MilesCranmer First part -- great! That wasn't clear to me, but I'm glad that's possible. Second part -- shame I can't convince you :)
2 · 0 · 1
RT @rohitsingh8080: AbMAP, our work on adapting PLMs for antibodies, is now out in @PNASNews. A veritable zoo of PLMs now exists, so why…
0 · 33 · 0
On interleaving -- I don't mean interleaving gradients. What I mean is: do you actually need to do any AD yourself *at all*? In pseudocode (function names illustrative):

```python
for _ in range(steps):
    # PySR runs one round of genetic optimization over symbolic forms...
    equations = pysr.genetic_optimization_round(equations)
    # ...then I tune their coefficients in my favourite AD framework.
    equations = jax.optimize_via_gradient_descent(equations)
frontier = pysr.pareto_frontier(equations)
best_equation = frontier[0]  # e.g. pick by preferred criterion
```

That is, PySR does a single round of genetic optimization, then returns some equations for me to push through my favourite AD framework (to optimize their coefficients) -- and then I can just trigger a new call to PySR to do another round of genetic optimization. This sounds like it might be what you're getting at with your second paragraph? If so, great! This would offer maximal flexibility for a user to do anything they like in AD, whilst enabling SymbolicRegression.jl to do the one thing it uniquely does (genetic optimization). And it means you don't need to also support the kitchen sink of possible AD operations. (Python-in-Julia would probably be very slow if you need to optimize a lot of equations; Julia-in-Python probably can't handle the full gamut of Python AD frameworks out there. So my hope is to make the two sit side by side instead.)

On the Julia-formatted strings: right, I realise that you're passing these directly to Julia... and I think that's a mistake! I'm a Python programmer and I have to look up Julia syntax every time I use it. My IDE won't highlight typos or lint errors in embedded strings written in some other language. It's a big usability footgun.
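Concretely, the AD half of that flip-flop (all names and data below are mine, purely illustrative) is just ordinary gradient descent on the coefficients of one fixed symbolic form:

```python
import jax
import jax.numpy as jnp

# Candidate symbolic form: y = a * sin(b * x), with free coefficients (a, b).
def candidate(coeffs, x):
    a, b = coeffs
    return a * jnp.sin(b * x)

def loss(coeffs, x, y):
    return jnp.mean((candidate(coeffs, x) - y) ** 2)

@jax.jit
def step(coeffs, x, y, lr=1e-2):
    return coeffs - lr * jax.grad(loss)(coeffs, x, y)

x = jnp.linspace(0.0, 3.0, 64)
y = 2.0 * jnp.sin(1.5 * x)            # synthetic target
coeffs = jnp.array([1.0, 1.0])
for _ in range(1000):
    coeffs = step(coeffs, x, y)       # coefficients -> approx (2.0, 1.5)
```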
1 · 0 · 0
Hmmm -- so I get pretty worried about trying to interleave AD systems! In short, that sounds like a really tricky technical problem, and one that will be impossible to debug when it goes wrong for an end user. (Especially given how unreliable Julia AD is!) Moreover, for AD-on-neural-networks, I (like many users) already have opinions and want to use some other framework in JAX (e.g. Optimistix) or PyTorch (e.g. Lightning), and I'd challenge you to be able to integrate with those.

Can you remind me where AD enters the picture for you? The picture I have in my head is a flip-flop back and forth, genetic->AD->genetic->AD, to first optimize the symbolic form, then the coefficients, then repeat? Over how many systems in parallel?

In terms of how to express the symbolic form: I'd definitely suggest using sympy instead of Julia-formatted strings! This is pretty much exactly what sympy is designed for, and it means users get a familiar, flexible approach that doesn't involve learning a whole separate syntax. I've now written quite a few sympy<>torch<>jax converters; consuming this format is pretty easy.
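For what the sympy route can look like in practice -- a sketch, where the expression is mine and `modules="jax"` assumes a recent sympy version that ships the JAX printer:

```python
import jax
import jax.numpy as jnp
import sympy

# A symbolic expression, as a symbolic-regression library might return one.
x, y = sympy.symbols("x y")
expr = sympy.sin(x) * y + x**2

# Lambdify to a jax.numpy-backed function; the result is traceable,
# so jit/grad/vmap all apply to it directly.
fn = sympy.lambdify((x, y), expr, modules="jax")
dfdx = jax.grad(fn, argnums=0)
print(dfdx(0.5, 2.0))   # d/dx [sin(x)*y + x^2] evaluated at (0.5, 2.0)
```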
1 · 0 · 0