![Adam Hibble Profile](https://pbs.twimg.com/profile_images/1349725701654474761/lzfLod4M_x96.jpg)
Adam Hibble
@Algomancer
Followers: 4K · Following: 2K · Statuses: 3K
I generate models that generate other stuff, working on @mancerlabs -- Prev: Founder of Popgun Labs (Techstars), Founder of the @QUTCode Network.
Joined August 2013
Training a JEPA with randomly sampled projectors (VICReg applied to projector(z)) enforces more pairwise independence; combined with distillation and layerwise targets, it converges a lot quicker than vanilla JEPA in my very early tests. No collapse issues across a wide set of hyperparameters. Still more training to do, but it's looking like a really great configuration.
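A minimal sketch of what that configuration could look like, assuming a toy MLP context encoder with an EMA target encoder in place of a real JEPA backbone. The names `sample_projector` and `vicreg_terms`, and every hyperparameter value, are illustrative assumptions, not the author's code; only the final-layer target is regressed here, whereas the tweet's layerwise variant would tap intermediate activations too.

```python
# Hypothetical sketch of JEPA + randomly sampled projectors + VICReg-style
# regularization on projector(z). NOT the author's implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_projector(dim, out_dim=64):
    """Draw a fresh, frozen random linear projector each step."""
    proj = nn.Linear(dim, out_dim, bias=False)
    for p in proj.parameters():
        p.requires_grad_(False)
    return proj

def vicreg_terms(z, gamma=1.0, eps=1e-4):
    """Variance + covariance penalties from VICReg, applied to projected z.
    (The invariance term is played by the JEPA prediction loss below.)"""
    z = z - z.mean(dim=0)
    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = F.relu(gamma - std).mean()        # keep per-dim variance up
    n, d = z.shape
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d         # decorrelate feature pairs
    return var_loss + cov_loss

# Toy stand-ins: context encoder is trained; target encoder is an EMA copy.
encoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 128))
target_encoder = copy.deepcopy(encoder)
for p in target_encoder.parameters():
    p.requires_grad_(False)
predictor = nn.Linear(128, 128)
opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

for step in range(100):
    x_ctx = torch.randn(256, 32)                     # "context" view (fake data)
    x_tgt = x_ctx + 0.1 * torch.randn_like(x_ctx)    # "target" view

    z_pred = predictor(encoder(x_ctx))
    with torch.no_grad():
        z_tgt = target_encoder(x_tgt)

    # Distillation-style JEPA prediction loss against the EMA target.
    pred_loss = F.mse_loss(z_pred, z_tgt)

    # A newly sampled projector each step: regularizing many random
    # projections of z pushes toward pairwise independence of features.
    proj = sample_projector(z_pred.shape[-1])
    reg_loss = vicreg_terms(proj(z_pred))

    loss = pred_loss + 0.1 * reg_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

    # EMA update of the target encoder (momentum 0.99).
    with torch.no_grad():
        for p_t, p_s in zip(target_encoder.parameters(), encoder.parameters()):
            p_t.mul_(0.99).add_(p_s, alpha=0.01)
```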
RT @rosmine_b: lol I was playing around with a vision language model, and it thinks this xfinity logo is a hate symbol (see below for full…
@NickMayumu @jerber888 @ylecun Did. Shared results and some ablations and extensions in Discord back then.
@mayfer @iScienceLuvr The old version used a VAE back then.
Btw @Christo_crass this is how the mancer model worked a few years ago, just with a VampPrior latent instead of something like JEPA.