![hallerite Profile](https://pbs.twimg.com/profile_images/1863389983903481856/ztd0YhqX_x96.jpg)
hallerite
@hallerite
Followers: 529
Following: 830
Statuses: 566
sampling latent space to forge κλέος ἄφθιτον
a place in the sun
Joined December 2024
2025 will be the year in which I will:
- write my first conference paper
- graduate
- start learning Mandarin or Arabic
- improve my Spanish, Russian, and French enough to get by
- join the 1000 lbs club for the big 3
- improve my boxing
- finally develop my cardio enough to run 10 km without being out of breath
- meet lots of cool people and try out new things

grateful for what has been, excited for what's to come.
1
0
16
@iamRezaSayar there is no reason to lie / mislead for the sake of Mistral, I mean. Of course he has ulterior motives, too.
1
0
0
my understanding is that, if you just look at LLMs, it's obvious they know a lot of facts and concepts quite well (certainly much more than any single human does), but in some sense they cannot use all of this factual knowledge to find inconsistencies in the theories we have. There are no novel insights from LLMs yet, whereas I imagine a human with such a breadth of knowledge would almost certainly find flaws in some of the facts they learned. The LLMs lack good epistemics. To them, a token from a bad source is just as good as one from a good source, and the more often a concept or truism appears in training, the more confident the model is in that fact. Often, if you find a contradiction in its output, it will agree with you that it is indeed contradicting another fact it has learned, or even pure logic, but such reflection does not happen during training, so any deviation from what it learned during training (= a novel insight) is an error to it.
0
0
0