![KOVIN SEGNOCCHI Profile](https://pbs.twimg.com/profile_images/1786672032626728960/iP-W3yX2_x96.jpg)
KOVIN SEGNOCCHI
@kovinsegnocchi
Followers
3K
Following
1K
Statuses
664
$KOVIN Segnocchi leike abbax gud tech CA : 0x694200a68B18232916353250955bE220e88c5cBB https://t.co/c4Syveg9TN
Joined August 2023
RT @kovinsegnocchi: Congrats to all the winners: @nidudasi27
@Toyoavax9000
@Nd_Nobody_
@RoypiTw3
@d_iam_ace
0
4
0
idk what this means, but kevin is a gigabrainz so i gotta trust him
This is assuming no external fundamental changes (which could happen, for example some fancy new type of leader architecture that compresses txs in some way)! Under a standard BFT implementation, a basic calculation for simple transfers is base tx + signature + metadata + instruction data, which comes to roughly 250 bytes (correct me if this has changed recently). Over a 1 Gbps network that gives a theoretical maximum sustained throughput in the six-figure TPS range, but that assumes absolutely no consensus overhead and no storing of the actual data. If you include those, it collapses (empirically, based on internal tests we've also run) to about a tenth of that, putting sustained state changes (simple transfers only) in the five-figure TPS range. For more complex txs it'll be lower. This isn't to say it can't reach 1M TPS; it just needs a fundamentally different set of assumptions, for example pushing everything through one leader that compresses data and stays ahead of all replicas.
5
10
24
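
A quick back-of-envelope sketch of the arithmetic in the tweet above. The 250-byte transfer size, the 1 Gbps link, and the ~10x consensus/storage haircut are the tweet's own assumptions (not measured figures here), and all names in the snippet are illustrative only.

```python
# Back-of-envelope TPS estimate under the tweet's assumptions:
# ~250 bytes per simple transfer, a 1 Gbps network link, and a rough
# 10x reduction once consensus + storing the data are included.

TX_SIZE_BYTES = 250            # base tx + signature + metadata + instruction data (assumed)
NETWORK_BPS = 1_000_000_000    # 1 Gbps link (assumed)
CONSENSUS_STORAGE_FACTOR = 10  # tweet's rough ~10x collapse for consensus + storage (assumed)


def theoretical_tps(tx_size_bytes: int = TX_SIZE_BYTES,
                    network_bps: int = NETWORK_BPS) -> float:
    """Bandwidth-limited ceiling: ignores consensus, execution, and storage."""
    bytes_per_second = network_bps / 8
    return bytes_per_second / tx_size_bytes


def sustained_tps(factor: int = CONSENSUS_STORAGE_FACTOR) -> float:
    """Apply the rough ~10x reduction for consensus and persisting state."""
    return theoretical_tps() / factor


if __name__ == "__main__":
    print(f"bandwidth-limited ceiling: {theoretical_tps():,.0f} TPS")  # ~500,000 (six figures)
    print(f"with consensus + storage:  {sustained_tps():,.0f} TPS")    # ~50,000 (five figures)
```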