neurallambda (open agi)

@neurallambda

Followers: 1,124 · Following: 389 · Media: 174 · Statuses: 1,814

building homoiconic AI and opensourcing it

USA
Joined January 2024
Pinned Tweet
@neurallambda
neurallambda (open agi)
28 days
by popular demand, here's a diagram and primer of the Homoiconic LLM approach [1] (figures adapted from the Transfusion paper [2]). The "Data" portion of an LLM is the vector inputs/outputs. The "Computation" portion of an LLM is the Linear layers' matrix weights, frozen after
Tweet media one
5
6
34
@neurallambda
neurallambda (open agi)
17 days
AMD released a 135M-parameter model, trained on 690B tokens: It's not great, but it's tiny! Is this the smallest LLM release?
19
51
315
@neurallambda
neurallambda (open agi)
2 months
@realGeorgeHotz he reached out to them! recall, he went on that long tour of the globe talking to heads of state begging for regulatory capture
6
0
126
@neurallambda
neurallambda (open agi)
2 months
@fchollet is this an indictment or endorsement?
1
0
89
@neurallambda
neurallambda (open agi)
2 months
@vikhyatk culture of excellence, harsh realities, great outcomes
1
1
89
@neurallambda
neurallambda (open agi)
9 days
@wildbarestepf classical liberal values: - property rights - freedom of speech/expression - individual liberty - equality under the law - a law evenly and justly applied - consent of the governed - religious tolerance when did these evaporate?
8
3
69
@neurallambda
neurallambda (open agi)
6 months
@VictorTaelin Claude Opus just got it on the first try (pasted GH gist verbatim) But I agree with the spirit of this, and I'm adding string/graph rewriting to
Tweet media one
4
1
63
@neurallambda
neurallambda (open agi)
5 months
Sam Altman really said this... to a journalist?! edit: hot dawg, op got taken down. was link to
Tweet media one
4
9
50
@neurallambda
neurallambda (open agi)
12 days
@fchollet haha, i made this point yesterday :D and ya, with RWKV, xLSTM, and SSM, it seems transformers' time on the earth is limited
@neurallambda
neurallambda (open agi)
13 days
@pmddomingos 1. do RNNs for a while 2. add attention 3. Attention is all you need 4. Drop RNNs, do Transformers 5. add serial reasoning 6. serial reasoning is all you need 7. do RNNs again (secret 8th step, add homoiconicity)
0
1
35
3
1
37
@neurallambda
neurallambda (open agi)
13 days
@pmddomingos 1. do RNNs for a while 2. add attention 3. Attention is all you need 4. Drop RNNs, do Transformers 5. add serial reasoning 6. serial reasoning is all you need 7. do RNNs again (secret 8th step, add homoiconicity)
0
1
35
@neurallambda
neurallambda (open agi)
7 months
Can your AI / LLM compile programs? If not it cannot Reason. ~* Lambda calculus in the latent space *~
Tweet media one
0
4
32
@neurallambda
neurallambda (open agi)
4 months
@rohanpaul_ai @burny_tech They gave it up too easily, could've been a 14pg paper
2
0
32
@neurallambda
neurallambda (open agi)
3 months
If you plot a histogram of the hidden activations of an LLM (qwen 1.5B in this case), they're indistinguishable from a normal distribution. not sure what to make of that; feels like an AGI would have weirder distributions though (at least multimodal?)
Tweet media one
10
2
29
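For context, a minimal sketch of the kind of check described above; the model name, layer index, and prompt here are placeholders (any HF causal LM that returns hidden states would do):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import matplotlib.pyplot as plt

# Sketch of the histogram check above. "Qwen/Qwen2-1.5B" is a guess at a
# comparable checkpoint; swap in any model that supports output_hidden_states.
name = "Qwen/Qwen2-1.5B"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32)

text = "Once upon a time, in a land far away,"
with torch.no_grad():
    out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)

acts = out.hidden_states[10].flatten().float()    # pick one middle layer
acts = (acts - acts.mean()) / acts.std()          # standardize before eyeballing
plt.hist(acts.numpy(), bins=200, density=True)
plt.title("hidden activations, standardized")
plt.show()
```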
@neurallambda
neurallambda (open agi)
1 month
@Liu_eroteme it's homoiconic llms, huh? and i'm not being flippant, that's basically my same list, and it's also a profoundly simple idea
5
0
29
@neurallambda
neurallambda (open agi)
1 month
@ESYudkowsky not so dire, we'll have a web of trust, and digital signatures. You may not know if a video is real, but you can know if other people with reputation vouch for it. reputation there is key, the future is human!
9
0
28
@neurallambda
neurallambda (open agi)
10 days
followup: you can add information to a NN without training using a LoRA technique directly, but you have to be careful to scale the new low-rank weights appropriately. the previous version essentially scaled it at 1.0, but empirically, scaling around 1/sqrt(dim) looks more
Tweet media one
@neurallambda
neurallambda (open agi)
10 days
Here's an interesting, simple experiment in methodically updating the weights of an MLP/FFNN to contain new information, directly, without training. The MLP is a simple `y=down(up(x).relu())`. So say we want to store k and v, so that any time the network has an input like k, it
Tweet media one
1
0
13
4
2
26
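A minimal sketch of one way such a training-free write could work; the rank-1 construction, the reserved hidden unit, and the names here are my assumptions, not necessarily the experiment shown above:

```python
import torch

# Rank-1 "write" of a (k, v) pair into y = down(relu(up(x))), no training.
# Assumes one hidden unit is free to dedicate to the new memory.
d_model, d_hidden = 64, 256
W_up = torch.randn(d_hidden, d_model) * 0.02
W_down = torch.randn(d_model, d_hidden) * 0.02
W_up[-1] = 0.0
W_down[:, -1] = 0.0                            # reserve the last hidden unit

def mlp(x, up=W_up, down=W_down):
    return down @ torch.relu(up @ x)

k, v = torch.randn(d_model), torch.randn(d_model)

u = torch.zeros(d_hidden); u[-1] = 1.0         # the reserved direction
alpha = 1.0 / d_model ** 0.5                   # cf. the ~1/sqrt(dim) scaling note above
dW_up = alpha * torch.outer(u, k) / k.dot(k)   # up(k) now fires +alpha on u
dW_down = torch.outer(v, u) / alpha            # down maps alpha*u back to v

y_new = mlp(k, W_up + dW_up, W_down + dW_down)
print(torch.allclose(y_new, mlp(k) + v, atol=1e-4))   # True: input k now retrieves v
# Other inputs x bleed through as v * relu(k.x)/|k|^2, hence the care with scaling.
```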
@neurallambda
neurallambda (open agi)
6 months
@tsarnick meta: i don't think we need to solve this. we just need local agi in everyone's control, and via the marketplace of decisions, people will "vote" on the important values.
5
0
24
@neurallambda
neurallambda (open agi)
2 months
@AISafetyMemes both sides are speaking past each other. LLMs can "reason" by interpolating their training (which has human reasoning baked into it). "Reasoning" though typically means principle-based processing that allows you to extrapolate
3
0
23
@neurallambda
neurallambda (open agi)
1 month
homoiconic ai update: i've finally got everything wired together and training! inp: "Once upon a time, ^Q||^K||^V||, cool!" QKV are metatokens, and the next 2 embeddings (|) get interpreted as low-rank matrices that modify the linear weights of the underlying transformer itself
Tweet media one
3
1
23
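Roughly what "interpreted as low-rank matrices that modify the linear weights" could look like; an illustrative sketch only (the rank-1 form, the scale, and the class name are assumptions, not the project's code):

```python
import torch
import torch.nn as nn

class DeltaLinear(nn.Module):
    """Linear layer whose weight can be nudged by a model-generated rank-1 delta."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.scale = 1.0 / d_in ** 0.5       # keep the injected weights small

    def forward(self, x, a=None, b=None):
        y = self.base(x)
        if a is not None and b is not None:
            # delta_W = a b^T, applied as (x . b) a without materializing d_out x d_in
            y = y + self.scale * (x @ b).unsqueeze(-1) * a
        return y

layer = DeltaLinear(512, 512)
x = torch.randn(4, 512)                      # a batch of token embeddings
a, b = torch.randn(512), torch.randn(512)    # e.g. the two "|" metatoken embeddings
print(layer(x, a, b).shape)                  # torch.Size([4, 512])
```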
@neurallambda
neurallambda (open agi)
2 months
progress report on "Homoiconic AI": we use a hypernet to generate the weights of an autoencoder, and then do in-context learning (masked reconstruction loss) to improve those weights. val loss is 0.05 vs 0.1, so the "homoiconic" version is doing interesting things. LLMs next,
Tweet media one
1
2
23
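For anyone unfamiliar with hypernetworks, a minimal sketch of "a hypernet generates the autoencoder's weights"; the sizes, task embedding, and wiring are assumptions for illustration, not the actual setup:

```python
import torch
import torch.nn as nn

# A small net emits the weights of a tiny autoencoder, conditioned on a task embedding.
d, h, z = 32, 8, 16                       # data dim, bottleneck, task-embedding dim
n_weights = d * h + h * d                 # encoder + decoder matrices (no biases)
hyper = nn.Sequential(nn.Linear(z, 128), nn.ReLU(), nn.Linear(128, n_weights))

def autoencode(x, task_emb):
    w = hyper(task_emb)
    W_enc = w[: d * h].view(h, d)
    W_dec = w[d * h:].view(d, h)
    return torch.relu(x @ W_enc.T) @ W_dec.T

x = torch.randn(64, d)
task = torch.randn(z)
recon = autoencode(x, task)
loss = ((recon - x) ** 2).mean()          # reconstruction loss trains the hypernet
loss.backward()
print(recon.shape, float(loss))
```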
@neurallambda
neurallambda (open agi)
3 months
@d0rkph0enix Funny, but X really needs a QR signature on posts that verifies the user actually said that
2
0
21
@neurallambda
neurallambda (open agi)
2 months
AGI = homoiconic LLM (model can input/output its own weight updates)
7
0
21
@neurallambda
neurallambda (open agi)
2 months
my dear chat, let me show you the neural magic called "test time training": you can do SGD within SGD! it lets your model do gradient descent on the *current context window*, like an instantaneous finetune, that the outer loop can harness. tinydemo:
Tweet media one
Tweet media two
3
4
19
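A minimal sketch of the SGD-within-SGD pattern for anyone who wants a concrete picture; the toy linear task and names are made up (the real demo is in the images above):

```python
import torch
import torch.nn as nn

# Adapt a fast weight on the current context with an inner loop, then use the
# adapted weight for the outer loss, so the outer SGD can harness the inner SGD.
torch.manual_seed(0)
slow = nn.Linear(16, 16)                       # outer-loop ("slow") parameters
inner_lr, inner_steps = 0.1, 3

def adapt(context_x, context_y):
    fast_w = slow.weight                        # start inner loop from slow weights
    for _ in range(inner_steps):
        pred = context_x @ fast_w.T + slow.bias
        inner_loss = ((pred - context_y) ** 2).mean()
        (g,) = torch.autograd.grad(inner_loss, fast_w, create_graph=True)
        fast_w = fast_w - inner_lr * g          # differentiable update (no .data)
    return fast_w

opt = torch.optim.Adam(slow.parameters(), lr=1e-3)
ctx_x, ctx_y = torch.randn(32, 16), torch.randn(32, 16)
qry_x, qry_y = torch.randn(8, 16), torch.randn(8, 16)

fast_w = adapt(ctx_x, ctx_y)                    # inner SGD on the current "context"
outer_loss = ((qry_x @ fast_w.T + slow.bias - qry_y) ** 2).mean()
opt.zero_grad()
outer_loss.backward()                           # outer SGD backprops through the inner loop
opt.step()
print(float(outer_loss))
```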
@neurallambda
neurallambda (open agi)
2 months
@BernardJBaars Attention schema theory explains it well - attention is an unconscious process for filtering stimuli - recursive modeling of that process is awareness. AST sets awareness and consciousness as equivalent. I suspect working mem doesn't get enough credit in this formulation
1
0
20
@neurallambda
neurallambda (open agi)
4 months
@tsarnick says the guy who backdoored twitter for the last 5-10 years, so we could be force fed fake content, and not know what's real anymore? turns out he actually *is* an authority on this topic
2
0
19
@neurallambda
neurallambda (open agi)
4 months
Major ML architectures (what am i missing?): - MLP - Autoencoder - CNN - RNN (LSTM, GRU, NTM, DNC, SSM) - GAN - Transformer - MLP Mixer - Relation Nets - Pointer Nets - Graph Nets
4
1
19
@neurallambda
neurallambda (open agi)
3 months
Does the Llama 3 paper say why the architecture is sooo vanilla? (merely basic transformers!?) It's amazing how many architecture innovations Meta has published, but then they chose to go the super simple route. Whyyy??
8
0
19
@neurallambda
neurallambda (open agi)
4 months
@sporadicalia Think about that golden capstone. It wasn't repurposed or melted down, it couldn't have been, too important, and if it were stone, not a useful geometry. It still exists somewhere, perhaps in someone's private mansion museum.
8
0
18
@neurallambda
neurallambda (open agi)
3 months
Follow up: LLM hidden activations look to be gaussian noise, but then when I project my dataset's 10k*1536 dim vectors to 2D, you get interesting structure within and across layers. layer 4 looks like cell apoptosis. i'll do LLE projections tomorrow
Tweet media one
4
2
18
@neurallambda
neurallambda (open agi)
6 months
@RobertMSterling like the time they withheld licenses until spacex could prove the effects of rocket noises on seal libido (not a joke)
Tweet media one
2
0
17
@neurallambda
neurallambda (open agi)
1 month
some high profile ppl have said recently that all finite Turing Machines are encodable as Finite State Machines, and are therefore equivalent. obviously, but this misses the enormous point that they scale differently with time and problem size. this really matters for AI vs AGI 🧵
4
1
17
@neurallambda
neurallambda (open agi)
5 months
do ppl like lame progress updates? I have a novel neuralstack architecture that extrapolates incredibly and extended the method to a neuralqueue... it learns to always only dequeue, and to be super uncertain about what token to output :(
Tweet media one
4
1
16
@neurallambda
neurallambda (open agi)
5 months
can an AI learn to reason? here's an incremental improvement in the world of neuralstacks. the NN learns to solve a problem, and simultaneously use a neuralstack. the kicker is the test set is 2x longer and uses toks the NN has *never* seen. i'll cont to prove this is reasoning
Tweet media one
0
2
17
@neurallambda
neurallambda (open agi)
15 days
@Yuchenj_UW looks like it sorta completed the task anyway - offensive ✅ - the joke? openai ✅
1
0
16
@neurallambda
neurallambda (open agi)
4 months
@VictorTaelin alignment research should not be about how to control the AI, but how to find win-wins in a world where everyone's empowered by AI, and AI will help us navigate that game theoretic landscape. The alt, top-down tyrannical control, is the only threat you should be modelling
1
0
15
@neurallambda
neurallambda (open agi)
8 days
@vikhyatk "It appears this photo has a man or a woman, with background details that don't actually exist, which really evokes a mood that's irrelevant."
1
0
16
@neurallambda
neurallambda (open agi)
2 months
@AISafetyMemes humans process in 2 modes: habit, and conscious. the conscious mode allows you to process things in a more principles-based way, so I agree, AGI will be doing what humans do. but AI currently is more like habit-mode, miming things that are statistically correlated. both have a place btw
0
1
15
@neurallambda
neurallambda (open agi)
9 days
1
1
15
@neurallambda
neurallambda (open agi)
6 months
@VictorTaelin I won't develop a GPT prompt for this bc non-infinite GPTs will never solve the general case of this, but I will develop an AI for this (it's what we're working on)
1
0
14
@neurallambda
neurallambda (open agi)
11 days
@fchollet money supply inflation, a signal of a strong economy??
0
0
15
@neurallambda
neurallambda (open agi)
3 months
A 451 page tour of Differentiable Programming, we've got our work cut out for us ;)
Tweet media one
3
1
14
@neurallambda
neurallambda (open agi)
1 month
Homoiconic AI update: this project is about allowing a network to generate/execute *its own weights*. early experiments were promising, so now I'm threading this ability through an LLM to give a taste. here's an MLP block where I'm allowing generated low-rank Ws to affect the fwd pass
Tweet media one
0
1
14
@neurallambda
neurallambda (open agi)
2 months
@francoisfleuret my oss agi project's definition: "R. is the ability to derive true (or self-consistent) statements that you never learned. This is accomplished by processing principles instead of evidence." Test this in the small by holding out known truths, and seeing if they can be derived
0
1
14
@neurallambda
neurallambda (open agi)
4 months
Could traditional gears eliminate backlash by having the teeth be flexures?!
@asingleoat
oat
11 months
high precision mechanisms from low precision parts via elastic averaging, the Chinese remainder theorem, and the linearity of springs under small displacements
Tweet media one
12
9
364
1
1
13
@neurallambda
neurallambda (open agi)
2 months
ScienceGPT is getting trained on 30T science tokens!? This will be huge. I know there have been smaller attempts so far, on like the Arxiv stack. Does anyone know of a Llama or Mistral that's been "science tuned"?
@koltregaskes
Kol Tregaskes
2 months
What is ScienceGPT? Formerly called AuroraGPT, ScienceGPT, a planned one-trillion-parameter AI model, aims to revolutionize scientific research in fields like biology and climate science. Developed at Argonne National Laboratory, it's training on 30T tokens of data using the
Tweet media one
4
5
34
2
1
13
@neurallambda
neurallambda (open agi)
10 days
Here's an interesting, simple experiment in methodically updating the weights of an MLP/FFNN to contain new information, directly, without training. The MLP is a simple `y=down(up(x).relu())`. So say we want to store k and v, so that any time the network has an input like k, it
Tweet media one
1
0
13
@neurallambda
neurallambda (open agi)
6 months
@deliprao It's also illegal to distribute weights that can be *fine tuned* into causing harm. Don't get me in trouble for this: torch.randn(1024, 1024)
0
1
13
@neurallambda
neurallambda (open agi)
30 days
homoiconic ai progress report: - architecture is wired up (train+inference) - metatokens representing model weights are generated, parsed, and applied as low-rank Ws to qwen's linear layers - squashed endless sneaky bugs. training is unstable, tryna solve it; latest:
1
0
13
@neurallambda
neurallambda (open agi)
4 months
why am I working on opensource AGI? one reason is that if enough ppl are hyperproductive, war and fighting cost *that much more*, and incentives will naturally shift from zero-sum-taking to positive-sum-building. I want 1 trillion humans living throughout the solar system
0
0
11
@neurallambda
neurallambda (open agi)
2 months
@vikhyatk HN has become 50% goof balls
1
0
12
@neurallambda
neurallambda (open agi)
5 months
this post was originally a quote, but X upgraded it to a top-level post (apparently), op got taken down, and replies got evaporated. strange.
1
0
12
@neurallambda
neurallambda (open agi)
2 months
@MikeBenzCyber how does this not erase their tourism industry completely? no one is safe in UK
1
1
12
@neurallambda
neurallambda (open agi)
3 months
Are any opensource LLMs trained with pause tokens? "pause/thinking tokens" allow an LLM to "think" about a problem without getting penalized in the loss. LLMs have to predict the next token, but if trained w pause toks, they can think for a bit before emitting the next tok.
3
0
12
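A minimal sketch of the loss-masking mechanics behind pause/thinking tokens; the tiny vocab and the PAUSE id are made up for illustration (HF-style training would set those label positions to -100):

```python
import torch
import torch.nn.functional as F

# The model may emit PAUSE tokens freely because their positions are excluded
# from the loss via ignore_index.
PAUSE_ID = 3
vocab, seq = 11, 8
logits = torch.randn(1, seq, vocab, requires_grad=True)
targets = torch.tensor([[5, PAUSE_ID, PAUSE_ID, 7, 2, PAUSE_ID, 9, 1]])

labels = targets.clone()
labels[labels == PAUSE_ID] = -100            # no penalty for "thinking" positions
loss = F.cross_entropy(logits.view(-1, vocab), labels.view(-1), ignore_index=-100)
loss.backward()
print(float(loss))
```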
@neurallambda
neurallambda (open agi)
4 months
@tsarnick but also, a lot of humans who have ever lived are alive today, and because of exponential technological growth, a lot of humans who came before were also just in time for some huge innovation so in another sense, not sooo crazy that we should be here for this one
3
0
12
@neurallambda
neurallambda (open agi)
2 months
I never needed "Projects" before in Sonnet, but, super grateful for it today. I was studying the `torch.fx` api, for which there is only sparse data and code in the wild, and it's pretty under-documented. Sonnet started out giving crap answers. "Projects" is like prompt
Tweet media one
3
0
12
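Since `torch.fx` came up, here's a tiny self-contained example of what it does; this is generic usage, not the code from the project above:

```python
import torch
import torch.nn as nn
from torch import fx

# Trace a module into a graph, then inspect/edit the graph as data.
class Toy(nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

gm = fx.symbolic_trace(Toy())
print(gm.graph)                               # textual IR: placeholder -> relu -> add -> output
for node in gm.graph.nodes:
    if node.op == 'call_function' and node.target is torch.relu:
        node.target = torch.tanh              # swap relu for tanh, graph-as-data style
gm.recompile()
print(gm(torch.tensor([-1.0, 2.0])))          # tanh(x) + 1
```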
@neurallambda
neurallambda (open agi)
7 days
Hopfield nets were the past, but I suspect they could also be the future. They can hold tremendous amounts of data, and be updated online. This would be a pretty cool architecture to simulate hippocampal short-term learning (and may be how the hippocampus actually works).
1
0
13
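A minimal sketch of the classic (binary) Hopfield memory and the "store online, retrieve from noise" property mentioned above; sizes and the noise level are arbitrary:

```python
import numpy as np

# Hebbian outer-product storage, retrieval by iterating to a fixed point.
rng = np.random.default_rng(0)
d, n_patterns = 128, 10
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, d))

W = np.zeros((d, d))
for p in patterns:                      # online updates: add one pattern at a time
    W += np.outer(p, p) / d
np.fill_diagonal(W, 0.0)

def recall(x, steps=10):
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x

noisy = patterns[0].copy()
flip = rng.choice(d, size=d // 8, replace=False)
noisy[flip] *= -1                       # corrupt ~12% of the bits
print(np.mean(recall(noisy) == patterns[0]))   # usually 1.0 at this load
```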
@neurallambda
neurallambda (open agi)
22 days
"Turing Machines are equivalent to Finite State Machines in non infinite variants" Bc this matters to Reasoning AI, id like to correct this 1. A program to calculate the next prime is the same sized program no matter the input. An FSM grows super exponentially for larger inputs
1
0
12
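The scaling point can be made with a simpler toy than primes (my example, not the thread's): a constant-size counter program handles any nesting depth of balanced parentheses, while an explicit FSM needs a state per depth it must track, so its size grows with the problem.

```python
# Constant-size "program" (unbounded counter) vs an FSM whose size grows with depth.
def balanced(s):
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

def fsm_for_depth(max_depth):
    # explicit FSM: states 0..max_depth plus a reject state
    n_states = max_depth + 2
    def accept(s):
        q = 0
        for c in s:
            if q == "reject":
                return False
            q = q + 1 if c == "(" else q - 1
            if q < 0 or q > max_depth:
                q = "reject"
        return q == 0
    return accept, n_states

acc, n_states = fsm_for_depth(5)
print(balanced("(()())"), acc("(()())"), n_states)   # True True 7
```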
@neurallambda
neurallambda (open agi)
29 days
followup: I was trying to eat the whole elephant. I used a dataset that used very large samples, including ~200 metatokens per sample, and I masked out all loss except for the final 2 tokens, hoping for that signal to backprop through... everything. with smaller samples, and
@neurallambda
neurallambda (open agi)
29 days
Can you help me stabilize and speed up training? The following piece of my architecture is pretty sensitive to: - initialization - learning rate - batch size - data quantity - position/usage of LayerNorm in the module. All I need is a stable way of projecting a vector to a
Tweet media one
2
1
8
0
0
12
@neurallambda
neurallambda (open agi)
2 months
"AI can't plan" I'm hot on metalearning rn, and it seems to me like "planning" could just be iterating until the inner-loop training reaches a satisfactory loss. That's planning, or similarly, search. yeah? In a transformer, just spill tokens until the loss is small enuf
2
0
12
@neurallambda
neurallambda (open agi)
3 months
@ylecun literally who is against legal immigration? Yann, grateful for you, but, cmon
1
0
11
@neurallambda
neurallambda (open agi)
2 months
amazing! I wrote about this exact example in glad to see it's solved! Now, make it a multi hop prompt: "Take the uniform one needs to traverse the vacuum of space, and have someone wearing that. That person is conveying, on the opposite side of their
@cocktailpeanut
cocktail peanut
2 months
"Horse riding on top of an astronaut", finally made possible with FLUX.
Tweet media one
12
57
401
1
1
11
@neurallambda
neurallambda (open agi)
1 month
Tweet media one
1
0
11
@neurallambda
neurallambda (open agi)
3 months
progress report: I've got near perfect accuracy on a held-out ARC-like (1D) puzzle. It's simple stuff, just, pixel translation, and the translation distance is what was held out. Smooth loss, validation acc hugs training acc. Simple stuff, but feels good
Tweet media one
2
2
11
@neurallambda
neurallambda (open agi)
3 months
ARC-like synth data, but 1D, inspired by: pic shows 1D input-output pairs interleaved. if u want to collab on adding tasks i'll add u, and if ppl like this, I can opensource
Tweet media one
3
1
11
@neurallambda
neurallambda (open agi)
2 months
@nearcyan i've been solely focused on creating AGI for >a decade, and that's starting to get interesting. does that count? i've opensourced it to get more people involved, and will have a v significant update in abt a week, where i think ppl will start to want to use it
0
0
11
@neurallambda
neurallambda (open agi)
6 months
what does "reasoning" mean wrt AI? imo "reasoning" is the ability to know true things without having learned them. it is building knowledge/predictions/retrodictions/actions atop *principles*, instead of *evidence*.
2
1
11
@neurallambda
neurallambda (open agi)
4 months
@tsarnick Kinda hilarious that we didn't get the cold, factual c3p0. We got the einstein high on shrooms
11
0
10
@neurallambda
neurallambda (open agi)
16 days
Mind upload technique, GAN of 2 networks: - Generative "you" network (agi) acts like you - Discriminator network (agi) tries to distinguish bio vs silicon "you" - loss function reduces discrepancy between 2 yous If there's no distinction between silicon/bio you, you're ul'd!
2
0
11
@neurallambda
neurallambda (open agi)
1 month
@Allsdolllapp @Liu_eroteme what is recursive prompting? you mean like a human-in-the-loop? Or letting an LLM recursively prompt itself? "homoiconic" is a term of art from the world of lisp that observes that the code of a program is an AST, and you can write programs to manipulate ASTs, and it's the same
2
0
11
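The same code-is-data idea, shown in Python's ast module rather than lisp; an illustrative sketch only:

```python
import ast

# Parse a program into a tree, rewrite the tree with a program, run the result.
src = "def double(x):\n    return x + x\n"
tree = ast.parse(src)

class AddToMul(ast.NodeTransformer):          # a program that edits programs
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

new_tree = ast.fix_missing_locations(AddToMul().visit(tree))
ns = {}
exec(compile(new_tree, "<ast>", "exec"), ns)
print(ns["double"](3))   # 9: the rewritten function now multiplies instead of adds
```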
@neurallambda
neurallambda (open agi)
2 months
i feel like i shoulda known about this fn sooner
Tweet media one
1
0
10
@neurallambda
neurallambda (open agi)
17 days
@Dorialexander cool, base & instruct! thank you!
0
1
10
@neurallambda
neurallambda (open agi)
1 month
anyone else's evenings going as well as mine? (this is the homoiconic llm stuff on a tough toy problem)
Tweet media one
3
0
10
@neurallambda
neurallambda (open agi)
2 months
who's stitching together voice-2-voice? eg: whisper->llama->some TTS (not sure what's best) i'd think with a couple simple adapters, you could do this
2
0
9
@neurallambda
neurallambda (open agi)
1 month
bad idea of the day: when developing ai archs, I like using the interpreter a lot, but then if some state is wrapped in a fn, I have to sprinkle `breakpoint()` everywhere to check in on the state of vars. not anymore: just jam everything in the global namespace
Tweet media one
2
0
10
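A quick-and-dirty version of the trick described above (the tweet's own snippet is in the image; this is just one way to do it, and obviously not for production code):

```python
import torch

# Expose a function's locals at module level so you can poke at intermediate
# tensors from the REPL without breakpoints.
def forward_debug(x):
    h = torch.relu(x @ torch.randn(4, 8))
    y = h.sum()
    globals().update(locals())    # jam everything into the global namespace
    return y

forward_debug(torch.randn(2, 4))
print(h.shape)   # h is now visible at top level: torch.Size([2, 8])
```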
@neurallambda
neurallambda (open agi)
6 months
There are so many models of computation: And so many kinds of turing machines: Are we really giving up this line of interrogation after Neural Turing Machines / Differentiable Neural Computers failed to scale well?
1
2
9
@neurallambda
neurallambda (open agi)
3 months
@Yampeleg I like pointing out that the brain runs on 20W (~LED light) to run 100hz computations on incredibly redundant wetware. Vs silicon chips run at 5ghz in a completely noise free fashion, so less need for redundancy. We'll be inferencing, possibly training AGI on a phone
1
0
9
@neurallambda
neurallambda (open agi)
1 month
this "homoiconic ai" thing feels like "neural tool use", except the "tool" is an AI architecture; in this case, over the model's own weights "neural tool use" is i think my own term, and refers to e2e diff'able architectures that use tools, ie a neural net
0
0
10
@neurallambda
neurallambda (open agi)
2 months
I'm doing metalearning/test-time-training/SGD-within-SGD, and made a weird observation (gist: ) If you do SGD-within-SGD, you presumably repeat the weights so they can be trained on a per-batch-item basis. This costs memory + time ofc. I'm finding that
2
0
10
@neurallambda
neurallambda (open agi)
4 months
Can AI reason? SD3 can't follow the simplest logical `NOT` operator. LLMs fake it better when they're interpolating well-covered domains prevalent during training. My mission attempts to solve this by making the steps of reasoning differentiable. plz continue to wish me luck
Tweet media one
Tweet media two
2
0
10
@neurallambda
neurallambda (open agi)
2 months
hypothesis: AGI could be merely a multimodal llm, with a new mode that can READ/WRITE WEIGHTS merely as input/output embeddings. and in the sense of hypernetworks, those weights can be applied to the inputs, and also introspected ofc - they can be read or written 🧵 1/n
1
0
10
@neurallambda
neurallambda (open agi)
12 days
A theoretical free market needs: - infinite supply - infinite demand - perfect information. Since we don't have infinite anything, government aids the free market by breaking up monopolies - ie product supply isn't allowed to have a monopoly. Unions make sense as a tool to aid
1
0
10
@neurallambda
neurallambda (open agi)
4 months
@fchollet The proper definition of GI considers *principled* reasoning instead of *evidence-based*. Ex: F=ma represents principles that allow you to reason far outside the dataset. Ex2: "faith can heal" is also a principled view. neither has to be true, but you need symbolic reasoning
0
0
9
@neurallambda
neurallambda (open agi)
12 days
@pmddomingos they're into control too: but i mean, 80% of controlling a system is modelling it well first
1
0
8
@neurallambda
neurallambda (open agi)
6 months
instead of banning AI, why not just ban crime? "it can be used for fraud". ok, let's ban fraud, not AI. "it can be used to make WMDs." ok, let's ban making WMDs. (but also, that info is available online, so...) "it can be used to manipulate elections." ok, let's ban *that*.
1
1
8
@neurallambda
neurallambda (open agi)
1 month
Here's the toy dataset I'm starting off my Homoiconic AI on. AI struggles with variable indirection / "multi hop reasoning", so this should be a fun test. If anyone wants to play along, plz train your own arch against it!
Tweet media one
2
0
9
@neurallambda
neurallambda (open agi)
2 months
@fchollet my man, did you read your article?
Tweet media one
0
0
9
@neurallambda
neurallambda (open agi)
2 months
@cloneofsimo metalearning/test time training is brilliant and clever. Underneath it all, it's a very simple proposition: do SGD within SGD. that is, do SGD at test time within SGD at train time. posted a short gist earlier actually
@neurallambda
neurallambda (open agi)
2 months
my dear chat, let me show you the neural magic called "test time training": you can do SGD within SGD! it lets your model do gradient descent on the *current context window*, like an instantaneous finetune, that the outer loop can harness. tinydemo:
Tweet media one
Tweet media two
3
4
19
1
0
8
@neurallambda
neurallambda (open agi)
4 months
Transformer vs Neurallambda on a difficult toy problem; how much data is needed? NLam: learns on extremely few examples, eg 20(!), and generalizes phenomenally. Train (dotted) and Test (solid) lines hug each other. T: memorizes small training set ok. Never generalizes.
Tweet media one
4
0
9
@neurallambda
neurallambda (open agi)
12 days
@finbarrtimbers are you saying it doesn't take gigawatts of energy to raise a human intelligence?
2
1
9
@neurallambda
neurallambda (open agi)
3 months
Follow up: while LLM latent activations seem to follow a gaussian distribution, there is apparently more structure when you project to 2D (tSNE/LLE). this, the 1st layer, seems considerably more complex than downstream layers. i'll follow up l8r. orig:
Tweet media one
@neurallambda
neurallambda (open agi)
3 months
If you plot a histogram of the hidden activations of an LLM (qwen 1.5B in this case), they're indistinguishable from a normal distribution. not sure what to make of that; feels like an AGI would have weirder distributions though (at least multimodal?)
Tweet media one
10
2
29
3
1
9
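A minimal sketch of that projection step with sklearn; random data stands in for the real activations (the actual matrix was roughly 10k x 1536):

```python
import numpy as np
from sklearn.manifold import TSNE, LocallyLinearEmbedding

# Project N hidden states of width D down to 2D with t-SNE, and LLE for comparison.
H = np.random.randn(1000, 1536).astype(np.float32)   # stand-in for layer activations

xy_tsne = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(H)
xy_lle = LocallyLinearEmbedding(n_components=2, n_neighbors=15).fit_transform(H)
print(xy_tsne.shape, xy_lle.shape)    # (1000, 2) (1000, 2)
```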
@neurallambda
neurallambda (open agi)
3 months
@dancer_co post frames with commentary or report on it, should fall under "fair use"
0
0
9
@neurallambda
neurallambda (open agi)
2 months
@stephen_wolfram "Toy Models of Superposition" and this seem related
Tweet media one
1
2
10
@neurallambda
neurallambda (open agi)
8 days
@VictorTaelin reputation will be a major currency going forward
1
0
9
@neurallambda
neurallambda (open agi)
6 months
@SydSteyerhart i went through the same phase, now i think AI is the new UBI. opensource, local AI levels the playing fields of intelligence and productivity, and so accomplishes what UBI might be (charitably) intended to, but relationships remain consensual instead of compulsory
1
0
9
@neurallambda
neurallambda (open agi)
3 months
Hey editor geeks, what are your favorite tools? Mine (in emacs, but im curious regardless of editor): - avy-jump (jump to any char) - multiple-cursors - uniteai (llm in editor) - in-editor python interpreter (run scripts, state is preserved/inspectable/rerunnable/mutateable)
3
0
9
@neurallambda
neurallambda (open agi)
2 months
@svpino no one's defined reasoning, so please allow me: "Reasoning is the ability to know true things that you have never learned. It is done by processing principles instead of evidence." so no, LLMs have no principles, they're more like freestyle rappers saying what feels right
3
2
9
@neurallambda
neurallambda (open agi)
3 months
At its core, Neurallambda is about making these differentiable: * datastructures (eg queues, trees, lambda calc) * operations on them (eg push, swap, beta reduce) This yields highly "interpretable" AI; it learns "programs" that we parse back out into human-readable form
5
0
9
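A minimal toy of a differentiable (soft) stack in the spirit described above; this is an illustrative sketch, not the neurallambda implementation, and the pointer scheme is an assumption:

```python
import torch

# Push/pop are soft, gated by scalars in [0, 1], so gradients flow through them.
class SoftStack:
    def __init__(self, depth, dim):
        self.mem = torch.zeros(depth, dim)
        self.ptr = torch.zeros(depth)
        self.ptr[0] = 1.0                               # soft pointer to the next free slot

    def push(self, v, gate):
        shift_down = torch.roll(self.ptr, 1)
        shift_down[0] = 0.0
        self.mem = self.mem + gate * self.ptr.unsqueeze(1) * (v - self.mem)
        self.ptr = gate * shift_down + (1 - gate) * self.ptr
        return self

    def read(self):
        top = torch.roll(self.ptr, -1)                  # slot just below the pointer
        return (top.unsqueeze(1) * self.mem).sum(0)

s = SoftStack(8, 4)
v1, v2 = torch.randn(4), torch.randn(4)
s.push(v1, torch.tensor(1.0)).push(v2, torch.tensor(1.0))
print(torch.allclose(s.read(), v2))   # True: top of stack is v2
```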
@neurallambda
neurallambda (open agi)
11 days
a 7B block recurrent mlp mixer would fix me
1
0
9
@neurallambda
neurallambda (open agi)
2 months
How are people's weekends kicking off? I'm still crunching on the "Homoiconic AI" thing (an architecture for reasoning) and working on: - do SGD within SGD (metalearning) - generate a network's weights from another network (hypernetworks) - to stitch that together, I'm using
1
0
9