![Filip Jerzy Pizło Profile](https://pbs.twimg.com/profile_images/60893770/IMG_6695-crop-tall-contrast_x96.jpg)
Filip Jerzy Pizło
@filpizlo
Followers: 3K
Following: 4K
Statuses: 7K
Cool fact: even linking in Fil-C is memory safe. If one module says it has an extern of one type, but the symbol is defined with a different type, then uses of the symbol will dynamically check against the defined type. If the mismatch is severe enough that it would have violated safety, then the uses will trap. This works even with both static and dynamic linking (including dlopen).
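A minimal sketch of the kind of mismatch being described, using two hypothetical translation units and a made-up symbol name (an illustration of the behavior, not Fil-C's implementation):

```c
/* a.c: the defining module says the symbol is an int. */
int counter = 42;

/* b.c: another module (statically linked or loaded via dlopen)
 * declares the same symbol as a pointer. */
extern char *counter;

char peek(void) {
    /* In plain C this is undefined behavior; under Fil-C the use is
     * checked against the defined type (int), and since treating an
     * int as a pointer would violate memory safety, the access traps
     * instead of dereferencing garbage. */
    return counter[0];
}
```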
1
0
32
@Love2Code @ID_AA_Carmack Yeah, and that’ll also force you to get the analysis to scale as much as possible while still producing sensible error messages.
0
0
1
@Love2Code @ID_AA_Carmack I agree! It can be a useful thing! I’m just saying you’ll want some static annotations so that:
- Folks can zero in on the source of the error.
- Value flattening becomes more tractable.
- You scale better (not in terms of speed but in terms of not throwing false errors).
1
0
0
Yeah. That’s an example where things work out nicely. But there are so many examples where things work out badly! My favorite is the Air benchmark (still part of JetStream I think), where I rewrote JSC’s stack frame allocation pass in JS precisely enough that I could run both the JS version and the C++ version on the same inputs. The JS version is like 40x slower last I checked.
0
0
2
I’ve made blazing-fast global analyses before and seen others also get there. The hard part is making the analysis fast and simultaneously precise enough that it doesn’t turn into a nag fest of false errors for ginormous programs. (This is closely related to the “add types to find the error” issue. Type annotations modularize reasoning, so they let you build analyses that don’t have to scale, because they don’t have to look past any boundary that is annotated. And the scaling I worry about is not how fast the analysis runs, but whether it starts throwing too many errors to be useful once the code gets big.)
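A small C sketch of that modularization point, with a hypothetical scale() function: with a prototype at the boundary, a checker only has to compare each call site against the signature; without one, it has to follow the value across the whole program to notice the mistake.

```c
/* Annotated boundary: the checker reports the error right here at the
 * call site, without ever looking inside scale() or at other callers. */
double scale(double value, double factor);

double bad_call(void) {
    return scale("not a number", 2.0);  /* flagged locally */
}

/* Drop the prototype (pre-C99 implicit declaration) and the same
 * mistake can only be found by tracing the value globally, which is
 * exactly the kind of analysis that stops scaling on huge programs. */
```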
0
0
3
@Love2Code @ID_AA_Carmack And I think most of that overhead in practice comes, in my experience, from the fact that compound types don’t get flattened. It’s easy to flatten them in local data flow, but what happens if they escape to the heap? C benefits greatly from pervasive value flattening in the heap!
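A rough illustration of the difference, with a hypothetical Point type: the first allocation is the flattened layout C gives you (values contiguous in one heap block), the second is the boxed, pointer-per-element shape compound values tend to take once they escape to the heap in an untyped setting.

```c
#include <stdlib.h>

struct Point { double x, y; };

void demo(void) {
    /* Flattened in the heap: one allocation, 1000 points stored
     * contiguously, no per-element headers or pointer chasing. */
    struct Point *flat = malloc(1000 * sizeof *flat);

    /* The un-flattened shape: an array of pointers, each element its
     * own boxed heap object. */
    struct Point **boxed = malloc(1000 * sizeof *boxed);
    for (int i = 0; i < 1000; i++)
        boxed[i] = malloc(sizeof *boxed[i]);

    free(flat);
    for (int i = 0; i < 1000; i++) free(boxed[i]);
    free(boxed);
}
```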
1
0
1
Here’s the annoying thing I envision (which also happens in Hindley-Milner):
- Programmer gets an error that some value may not be what it should be.
- Value originates 10 calls or indirections away.
- Programmer wants to find out why, so in ML and similar languages they start sprinkling type annotations to figure out where things went off the rails.
Would you allow that?
1
1
5
@Love2Code @ID_AA_Carmack Also, I think the overhead of JS is often due to the lack of value types. Value types benefit from static typing because then you know where the programmer wanted the value to be flattened. Would you infer those reliably if you don't have type annotations?
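A tiny sketch of how a static type records that intent (the names here are made up): the declaration itself says whether the Vec3 lives inline or behind a pointer, which is exactly the decision an untyped front end would otherwise have to infer.

```c
struct Vec3 { float x, y, z; };

struct Particle {
    struct Vec3 position;   /* flattened: stored inline in Particle */
    struct Vec3 *velocity;  /* boxed: kept behind a pointer */
};
```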
2
0
0
RT @LithuaniaMFA: #OTD in 1863, the people of the Polish-Lithuanian Commonwealth rose against the occupying Russian Empire. The uprising re…
0
175
0