@VitalikButerin
How about we actually look at the technical merits of nChain's proposals rather than turn this into a pissing contest between personalities?
@Justin_Bons
@DesheShai
Can I offer an argument that Turing-complete (with blockchain as the tape) contracts are suboptimal?
"blind signing" is a big symptom of the problem.
Transactions should state their ledger-state modifications in a simple, non-Turing-complete manner, so wallets can tell the user what they are signing.
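A minimal sketch of the idea above (all names invented for illustration): if a transaction declares its ledger effects as plain data rather than executable script, a wallet can render them for the user without running anything, and "blind signing" disappears.

```python
# Hypothetical sketch (names invented): a transaction that declares its
# ledger effects as plain data, so a wallet can render them for the user
# without executing any script.
def describe(tx):
    """Return a human-readable line for each declared ledger change."""
    return [f"send {out['amount']} to {out['address']}" for out in tx["outputs"]]

tx = {"outputs": [{"address": "alice", "amount": 5},
                  {"address": "bob", "amount": 2}]}
print(describe(tx))  # ['send 5 to alice', 'send 2 to bob']
```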
@DesheShai
You are right, that lead was too strong. Sorry about that. As I qualified at the end, I just did a quick read of the whitepaper. But looking at DAG chains in general, either they don't form consensus on the ledger (vs. order of msgs), or they reduce to a blockchain.
@DesheShai
But a proof of chain membership (of a tx, presumably) does not prove tx validity in distributed timestamp systems (whereas it does in Satoshi-style blockchains) because there may be an earlier tx that invalidates this one.
My favorite part of BTC segwit/taproot is how it gives onchain data storage (ordinal NFTs) a 75% fee discount, and barely increases monetary transaction capacity, with the result that for every 1 MB of tx data, there's 3 MB of garbage.
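The discount falls out of segwit's weight formula (BIP 141): block weight = 4 × base bytes + witness bytes, capped at 4,000,000 units, so a witness byte costs a quarter of a base byte. A quick sketch of the arithmetic:

```python
MAX_WEIGHT = 4_000_000  # segwit consensus limit per block (BIP 141)

def weight(base_bytes, witness_bytes):
    # witness bytes count 1/4 as much as base bytes: the 75% discount
    return 4 * base_bytes + witness_bytes

# The same weight budget buys only 1 MB of ordinary (base) tx data...
assert weight(1_000_000, 0) == MAX_WEIGHT
# ...or roughly 4 MB of witness data (where inscriptions are stored)
assert weight(0, 4_000_000) == MAX_WEIGHT
```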
@DesheShai
"may happen after subsequent spends" is perhaps most important. It's a disaster to unwind. The diverged coins may be mixed with good coins, poisoning them and passing through many hands (all on the buggy client) before discovery. Much better to split or reject than to mix.
@DesheShai
I have only read the kas whitepaper (and stated that at the time) so I kept my comments deliberately general, and focused on BCH SLP which I know more about. However, my concern is not about anything you mention above. It's about what happens in the face of client bugs. (cont)
@DesheShai
With miner consensus the failure is obvious: the chain forks or the client stops following the tip. In client consensus models, all clients follow the same chain even though they're not in consensus. It's nontrivial for people to discover the problem, and discovery may happen after subsequent spends.
@DesheShai
P.S. I think you need to tone the rhetoric down a notch and focus on the tech. I am making no anti-kas statement because: 1. I am no kas expert. 2. Many things are tradeoffs. 3. We don't need to fight each other.
I'm saying why Nexa does not use DAG chains, then Twitter happens...
@DesheShai
You must rely on a benevolent oracle to provide a proof of the preceding tx, or to attest that there is none. One solution is for the system to periodically commit to a summary of valid previous txs, but that likely collapses your DAG into a chain, or undermines the purpose of the DAG...
@DesheShai
alright, reading PHANTOM... so you've clearly given up SPV proofs, even if "prob that GHOSTDAG's order between tx1 and tx2 changes... decreases exponentially". I can still provide a proof of tx2, and there are no efficient non-existence proofs since tx order within a block is arbitrary.
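To make the asymmetry concrete, here's a toy Merkle tree sketch (simplified: no double-SHA256 or leaf/node domain separation): proving a tx *is* in a block is a short hash path, but nothing in that path reveals whether a conflicting earlier tx also exists somewhere.

```python
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last hash on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, idx):
    """Sibling-hash path showing leaves[idx] is in the tree."""
    level = [h(l) for l in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        path.append((level[sib], sib < idx))  # (sibling hash, sibling-on-left?)
        idx //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return path

def verify(leaf, path, root):
    acc = h(leaf)
    for sib, left in path:
        acc = h(sib + acc) if left else h(acc + sib)
    return acc == root

txs = [b"tx0", b"tx1-conflicting-spend", b"tx2", b"tx3"]
root = merkle_root(txs)
# Membership of tx2 is cheap to prove...
assert verify(b"tx2", prove(txs, 2), root)
# ...but the proof says nothing about the conflicting tx1, so a light
# client cannot tell tx2 is invalid without scanning everything.
```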
@DesheShai
Now let's consider bugs in your clients. The paper says "PHANTOM achieves consensus on the order of blocks, and this guarantees agreement on the set of accepted transactions as well." NOT TRUE! It's only true for bug-for-bug compatible clients.
@DesheShai
You achieve *ordering consensus*, but Bitcoin et al. achieve *ledger consensus*, and buggy nodes that disagree fork off or get stuck. This is client consensus (of the ledger). [Aside: the BTC white paper is inconsistent on this, which is the subject of my 1st lecture at UMass.]
@DesheShai
So that means full nodes only. But then who cares about scale, because how many payments do you make from your laptop? Or phone wallets that trust a full node. But this is a censorship vector, since the wallet probably hits its creator's set of nodes, which can be shut down.
@DesheShai
But both accept that chain. In systems like SLP and BTC ordinals, this situation can persist as long as spends of A and B stay within the same client software. I'd guess that this subsequent issue is not a problem in kas. Another issue is SPV proofs.
@DesheShai
Reading sec 3.5 "main result", I would not agree with that definition of scalability ("safe to increase block creation rate without compromising security"). Scalability is how resource use changes as a function of the number of transactions.
BU and I were some of the strongest voices against the looting that's named "the IFP" the first time around, and so it should be obvious that we will not accept it now.
@DesheShai
In BTC et al., a bug resulting in a consensus difference causes a hard fork. In client consensus models, there is no immediate hard fork because invalid state is allowed onchain. So given 2 txs, A and B, one client thinks A is valid and the other thinks B is. One is buggy (cont).
@DesheShai
So another tradeoff. How deeply does KAS go into client consensus? Can you spend doublespends? Can you include illegal spends and then "spend" those? (BCH SLP was full client consensus, and the results were not pretty...) But I'm hoping KAS differs from full client consensus...
@jeff_foust
...a carefully worded statement giving the public absolutely no real data. This prevents us from making an assessment, which is perhaps the point.
@mononautical
The problem is more likely the client mistaking the value of the coins coming in. Modern chains like Nexa solve this by making the transaction commit to or explicitly specify the input amounts.
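A sketch of why committing to input amounts helps (hypothetical structure, loosely in the spirit of BIP 143-style sighashes): when the signed digest covers each input's amount, a compromised host can't lie to an offline signer about the fee, because a lie invalidates the signature.

```python
import hashlib
import json

def sighash(tx):
    """Hypothetical sketch: the digest a signer commits to includes the
    amount of every input, not just its outpoint, so an offline signer
    can compute and display the fee (inputs - outputs) safely."""
    msg = json.dumps({
        "inputs": [(i["outpoint"], i["amount"]) for i in tx["inputs"]],
        "outputs": tx["outputs"],
    }, sort_keys=True).encode()
    return hashlib.sha256(msg).hexdigest()

tx = {"inputs": [{"outpoint": "a:0", "amount": 10}],
      "outputs": [["bob", 9]]}
d1 = sighash(tx)
# If a compromised host lies about the input amount (hiding a huge fee),
# the digest changes and the signature no longer matches.
tx["inputs"][0]["amount"] = 100
assert sighash(tx) != d1
```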
If BTC were to hard fork, this and a bunch of other small things could be cleaned up.
@DesheShai
I disagree that it must be qualitative. You can model a blockchain's latency, scalability, resource usage, and security as a function of inputs like network latency, hash power, etc.
@DesheShai
hmm... I would call that latency. The time from transaction broadcast until the energy to undo is > X.
If a blockchain with a single 10 GB block per day has high scalability...
@OriNewman
@DesheShai
I'm aware. There are 2 reasoning paths from here. 1. explore indistinguishability (by light clients) of normal DAG branches from forks. 2. recognize the portion of PHANTOM that's miner consensus (most of it?) and the portion that's client (tx validity due to prior spends).
You will look back on ordinals/BRC-20 as a canary. It's the first time the tail (ETH) wagged the dog (BTC). But it will not challenge other token systems, because of vulnerabilities: silent (client-consensus) forks and indistinguishable outputs. Its only benefit is co-location on BTC.
@DesheShai
Security is how much energy it takes to change the record? Or in a blockchain context, how many hashes are downstream of a transaction.
My intuition is that all these sibling blocks must result in more resource use per tx and the same or fewer downstream hashes.
@DetiEth
@ercwl
You know this because you and he are best buddies? /s We did not exploit it for moral reasons, and because destroying confidence in the largest crypto would undermine the entire space.
A 2016 document that proposes replacing sigop counting with the simpler and more accurate method of input length: . It would have avoided the BCH bug if deployed.
@_unwriter
getrawtransaction takes the cs_main lock, so it's serialized with itself and lots of other operations. But since it's read-only, it should be possible to remove that lock. I will add getrawtransaction to our stress tests and optimize.
@yhaiyang
stop making excuses for the BCH "leadership" that has deployed whatever hard fork features it wanted (and more importantly *NOT deployed* other features) regardless of community feedback.
@bchautist
@Justin_Bons
@DesheShai
It's interesting that a non-Turing-complete program can verify a Turing-complete one, if called repeatedly. So we've isolated the single operation (one potentially infinite loop) that differentiates them.
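A toy sketch of that isolation (machine and names invented): each call to `step()` is straight-line code with no loops, so it trivially terminates; the only potentially unbounded construct is the outer loop that invokes it repeatedly.

```python
# Hypothetical sketch: verify/execute ONE transition of a tiny
# decrement-to-zero machine. The function body contains no loops,
# so each call is guaranteed to terminate.
def step(state):
    counter, halted = state
    if halted:
        return state
    if counter == 0:
        return (counter, True)
    return (counter - 1, False)

state = (5, False)
while not state[1]:   # the single potentially-infinite loop, isolated here
    state = step(state)
print(state)  # (0, True)
```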
Rereading "The Cathedral and the Bazaar", it's clear that the IFP is the culmination of ABC's effort to bulldoze the bazaar and build a cathedral dev environment on top of the bodies.
@zawy3
@DesheShai
You can estimate the HR (assuming you mean hash rate) by publishing near solutions (weak blocks) or even just their headers. Don't need a DAG or consensus change for that. Just need more samples...
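A sketch of the sampling argument (numbers illustrative): any solution count at a known per-solution work gives the same mean hash-rate estimate, but weak blocks at an easier target yield far more samples per window, so the estimate is much less noisy.

```python
# Hypothetical sketch: estimate hash rate from solution counts.
def estimate_hashrate(solutions, hashes_per_solution, window_seconds):
    """Expected hashes performed per second over the window."""
    return solutions * hashes_per_solution / window_seconds

# 6 full blocks/hour at an illustrative 1e12 expected hashes each...
full = estimate_hashrate(6, 1e12, 3600)
# ...vs 600 weak blocks at 1/100th the difficulty: same mean estimate,
# but 100x the samples, so far lower variance.
weak = estimate_hashrate(600, 1e10, 3600)
assert full == weak
```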
@OriNewman
@DesheShai
As I tweeted previously, PHANTOM comes to consensus on the order of messages (and maybe some properties). It does not come to consensus on the ledger, because doublespends are allowed in validated blocks. Each client must look for prior spends to determine if a tx updates the ledger.
@ercwl
We (Bitcoin Unlimited) were testing interoperability with ABC (2 implementations of Bitcoin Cash) in rare edge cases when this bug caused a divergence because it was triggered in ABC. Awemany traced the code history back to Core.
@DesheShai
Which leads to the last thing: the paper doesn't even address the "null hypothesis". It needs to show a wide DAG is better in those metrics (uses fewer resources, transactions accrue security faster) than a narrow one (e.g. a rarely branching tree, AKA a blockchain). Maybe it does? IDK
@CobraBitcoin
But why intervene? It's so fun to watch the core devs prove that they don't understand the system they co-opted, with more layers of cruft over hacks and bandaids to control a permissionless network and stop a fancy-word redo of a decade-old, painfully limited token technology.
@robustus
There's 2 strategies: regulation is a form of precrime, anticipating and preventing possible crimes, and enforcement is punishing transgressors (which is itself a deterrent). It feels like these 2 strategies are applied unevenly and recently to EXACTLY the wrong companies.
@SonOfATech
@NexaMoney
Fix is released! Quick summary: some nodes hit a code path where a new script feature was simply not turned "on" (bug). Use of the feature caused a fork with a minority of nodes rejecting the tx that used the new feature.
@deadalnix
@gavinandresen
Yet BU, not Classic or XT, succeeded in getting 50% BTC hash, proposed the Bitcoin Cash hard fork, and delivered SW (with ABC). Imperfection happens when working with the code for only a month. After a year+ of study you made ABC and already have a hard fork bug.
@CalvinAyre
But "your side" also wants to remove DSV because it can be implemented with the current opcodes. So I guess we better start removing those opcodes too.
#MutuallyDestructivePropaganda
@dgenr818
How is the data in OP_RETURN somehow more special than the data in the transaction, so that it confirms transfer of ownership but the transaction data does not?
The problem isn't unregulated crypto. The problem is the transition to DYOR. We saw this same transition in 2008. People trusted the banks' judgement on whether they were good for a mortgage. But the invention of MBS meant banks no longer cared. So lots of personal bankruptcies.
@ccatalini
@gavinandresen
You've bought into an unproven core narrative. It's very possible that BCH will be MORE decentralized. Who wants to run a full node if only banks can afford to participate? On the other hand, a reasonable home computer and connection can keep up with BCH easily.
@SonOfATech
@NexaMoney
Ironically this is hard to test -- our test chains independently control feature activation so the feature and activation itself can be tested before mainnet activation. So the mainnet *activation itself* can only be tested on mainnet.
@pokkst
@be_cashy
The even crazier thing: unlike another time some of us remember, this majority pool isn't even doing anything funny. It's simply guilty of not siding with the "official" nodes when an orphan emerged.
@NicerInPerson
@zawy3
@danieleripoll
Yes, total revenue per block is important, but entirely discounting hash is a mistake. A lot goes into that. For example, taking advantage of untapped sources of surplus energy is energy that an attacker can't use. And ASIC improvements mean an attacker must buy the newest equipment.
@ari_cryptonized
@deadalnix
@gavinandresen
After BU paves a nice highway through a warzone, you are welcome to drive your car down it for a picnic and complain that the craters mar the scenery.
@KanevLubo
@BitcoinUnlimit
Miner validated, efficient-membership proof (SPV capable), UTXO commitments are very valuable. Important for scaling, fast sync, to efficiently prove recent UTXO ownership extra-blockchain, and even to supply such a proof to a script.
@zawy3
@NicerInPerson
@danieleripoll
As BTC grows to fill sources of surplus energy, these attacks become much more expensive and noticeable because you'd have to redirect energy from other uses. And if your new HW is X% more efficient, do you even have (100+X)% as much dark obsolete HW to deploy?
@deadalnix
@vinarmani
Bitcoin Unlimited went green in our Qt splash over a year ago; everyone is slowly moving over to a look that is differentiated from BTC, and is associated with money rather than gold.
@Horace34687264
@im_uname
@freeAgent85
@pokkst
@be_cashy
That's a 12-block reorganization, so if the network was "in consensus", all nodes would have ignored this fork due to the finalization rule. Yet there are reports that many nodes followed the fork... remember that your node's logs are subjective.
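A minimal sketch of a finalization rule (constants and names invented for illustration): once a block is buried N deep, a node refuses to reorganize past it, however long the competing fork is.

```python
# Hypothetical sketch of a finalization rule.
FINALIZATION_DEPTH = 10  # blocks; illustrative value

def should_follow_fork(tip_height, fork_point_height):
    """A node ignores any fork whose split point is finalized."""
    reorg_depth = tip_height - fork_point_height
    return reorg_depth <= FINALIZATION_DEPTH

# A 12-block reorg violates the rule, so a node "in consensus" ignores it
assert not should_follow_fork(tip_height=100, fork_point_height=88)
# A shallow 5-block reorg is still allowed
assert should_follow_fork(tip_height=100, fork_point_height=95)
```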
@balajis
We have implemented a proof of concept and a (draft) spec: . Please review! Sorry the formatting is a little rough when viewed in github -- it displays properly once you log into (eating our own dogfood).
@ckpooldev
You know what would be much more awesome? Proving the DoS protection using tx with dense economic activity, and doing so 6 years ago, before crypto tx demand outstripped BTC supply space.
@twobitidiot
Dems should take note; it's a VERY dangerous game to make people choose between the likely loss of financial freedom vs the possible-but-seems-too-incredible-to-actually-happen loss of democracy.
@clemensley
Why not write a rebuttal? I'd love to read it. People don't tend to tweet or livestream technical discussion because these things require careful thought not quick comebacks.
@DavidShares
Seems like he's pushing payment processors (aka banks) for small-value txs. But if you do the math, I'll bet the end result is EVERY individual tx is "small value". Bitcoin becomes a bank settlement network.
All the upside of the dollar and all the risk of an experimental technology running novel algorithms within a custodial startup hemorrhaging its seed money. What about these systems was attractive? What about them has anything to do with cryptocurrency?