hallerite

@hallerite

Followers
529
Following
830
Statuses
566

sampling latent space to forge κλέος ἄφθιτον

a place in the sun
Joined December 2024
@hallerite
hallerite
1 month
2025 will be the year in which I will
- write my first conference paper
- graduate
- start learning mandarin or arabic
- improve my spanish, russian and french enough to get by
- join the 1000lbs club for the big 3
- improve my boxing
- finally develop my cardio enough to run 10km without being out of breath
- meet lots of cool people and try out new things

grateful for what has been, excited for what's to come.
1
0
16
@hallerite
hallerite
48 minutes
@max_paperclips excuse me, but the money printing will have to stop.
[image attached]
1
0
1
@hallerite
hallerite
2 hours
0
0
3
@hallerite
hallerite
2 hours
A potential remedy would be to study machine unlearning / forgetting.
0
0
0
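For concreteness, a minimal sketch of what a gradient-ascent style unlearning setup can look like, since the tweet above doesn't name a specific method. The toy model, the forget/retain splits, and the loss weighting are all assumptions, not anything hallerite describes:

# minimal machine-unlearning sketch: ascend the loss on the data to forget,
# descend on the data to retain. purely illustrative; all data is made up.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)                        # stand-in for a trained model
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# hypothetical splits: facts to forget vs. knowledge to keep
forget_x, forget_y = torch.randn(32, 16), torch.randint(0, 2, (32,))
retain_x, retain_y = torch.randn(32, 16), torch.randint(0, 2, (32,))

for step in range(100):
    opt.zero_grad()
    # negate the loss on the forget set so the optimizer ascends it (unlearns)
    forget_loss = -loss_fn(model(forget_x), forget_y)
    # keep descending on the retain set so the rest of the model survives
    retain_loss = loss_fn(model(retain_x), retain_y)
    (forget_loss + retain_loss).backward()
    opt.step()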
@hallerite
hallerite
2 hours
@iamRezaSayar there is no reason to lie / mislead for the sake of Mistral, I mean. Of course he has ulterior motives, too.
1
0
0
@hallerite
hallerite
2 hours
whenever I have to study for robotics exams (last exam soon), I realize how much more I like ML. the iteration speed of bits is just so much higher than that of atoms.
0
0
3
@hallerite
hallerite
3 hours
my understanding is that, if you just look at LLMs, it's obvious they know a lot of facts and concepts quite well (certainly much more than any single human does), but in some sense they cannot use all of this factual knowledge to find inconsistencies in the theories we have. There are no novel insights from LLMs yet, whereas I imagine a human with that breadth of knowledge would almost certainly find flaws in some of the facts he learned. LLMs lack good epistemics: to them, a token from a bad source is just as good as one from a good source, and the more often a concept or truism appears in their training data, the more confident they are in that fact. Often, if you point out a contradiction in an LLM's output, it will agree that it is indeed contradicting another fact it has learned, or even pure logic, but such reflection does not happen during training, so any deviation from what it learned during training (= a novel insight) is an error to it.
0
0
0
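A toy illustration of the frequency-equals-confidence point from the tweet above, with a made-up corpus of source-tagged claims (my own example, not hallerite's): a pure count-based model ends up exactly as confident in a claim as the claim is frequent, and the source tag never enters the picture.

# toy: confidence is just relative frequency; source quality is ignored
from collections import Counter

# hypothetical training corpus: each claim tagged with a (ignored) source
corpus = (
    [("the earth is round", "textbook")] * 3
    + [("the earth is flat", "forum post")] * 30
)

counts = Counter(claim for claim, _source in corpus)
total = sum(counts.values())

for claim, n in counts.most_common():
    # the model's "belief" in a claim is its share of the training data
    print(f"P({claim!r}) = {n / total:.2f}")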
@hallerite
hallerite
5 hours
@willccbb check dms
0
0
2
@hallerite
hallerite
8 hours
@VictorTaelin travel to antarctica
0
0
2
@hallerite
hallerite
1 day
@gum1h0x check dms
0
0
0
@hallerite
hallerite
2 days
just compare the two for a trivial proof. also, I am sure Lean has nice properties and there are reasons it was designed this way, but man is it harder to learn
[two images attached]
0
0
1
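Since the screenshots aren't preserved, here is the kind of trivial statement the comparison is presumably about, written out in Lean 4. This is illustrative only; the original images may have shown a different statement or a different second system.

-- commutativity of addition on Nat, closed with the library lemma
example (a b : Nat) : a + b = b + a := Nat.add_comm a b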
@hallerite
hallerite
2 days
@giffmana @xlatentspace a true soloq warrior, I can respect that
1
0
1
@hallerite
hallerite
2 days
@giffmana @xlatentspace wanna dm me your friend id and we play some time? 👀
1
0
1
@hallerite
hallerite
2 days
@yacineMTB they are not really the same though. chinese dragons are long
0
0
3