New Ethereum blog post by Dr. Gavin Wood: "Blockchain Scalability: Chain-Fibers Redux"

First of all, it makes me giddy like a school child to read a reasonably detailed description of a scalable blockchain architecture for the first time ever. I am in no way qualified to judge the merit of such a complex cryptocurrency algorithm, but I have to say that on a first reading it sounds like a plausible approach that addresses most of the obvious problems.

However, it's possible I'm misreading some of the core concepts. Can other readers let me know if I've misunderstood any of the essential details, as described below?

Let's say one account address starts with "100110..." in binary and another starts with "100111...". If the "N" value is set to 4, then these two accounts would be in the same fiber, as would all other accounts whose addresses begin with "1001".
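To check my own understanding, here's a tiny Python sketch of how I read the fiber-assignment rule (the function name and the string encoding of addresses are mine, not the post's):

```python
# A minimal sketch of fiber assignment as I understand it: the first N bits
# of an account address determine its fiber. My own toy encoding, not the spec.

def fiber_of(address_bits: str, n: int = 4) -> str:
    """Return the fiber ID: the first n bits of the binary address."""
    return address_bits[:n]

assert fiber_of("100110", n=4) == fiber_of("100111", n=4) == "1001"  # same fiber
assert fiber_of("011100", n=4) == "0111"                             # a distant fiber
```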

Now, a user might send money from account "100110..." to account "011100...", which sit in two very distant fibers. However, there would be a collator somewhere that maintains the full history of fiber "1001" and fiber "0111" and has all the information needed to create an "X-Fiber Block". That block documents lots of detailed merkle-tree information, which allows anyone to validate an X-Fiber Block even if they don't have any of the history of the fibers involved.
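As I read it, the merkle-tree information works like a standard merkle proof: the X-Fiber Block carries enough sibling hashes that anyone can check an entry against a fiber's published root without holding that fiber's history. Here's a toy version (the proof format below is my own assumption, not the actual spec):

```python
# Toy merkle-proof verification: check a leaf against a root using only the
# sibling hashes along the path. My own encoding, not Chain-Fibers' actual one.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """proof is a list of (sibling_hash, side) pairs from leaf to root,
    where side says whether the sibling sits on the 'left' or 'right'."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

# Two-leaf example: root = h(h(a) + h(b)); prove 'b' with sibling h(a) on the left.
a, b = b"tx-a", b"tx-b"
root = h(h(a) + h(b))
assert verify_merkle_proof(b, [(h(a), "left")], root)
```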

However, the problem now becomes that there could be THOUSANDS and THOUSANDS of these X-Fiber Blocks created every second, many more than any single computer could possibly validate (less because of CPU time, and more because the bandwidth required would be staggeringly huge). This is where the validators and "fishermen" come in...
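Just to put rough numbers on the bandwidth point (these figures are purely illustrative guesses of mine, not from the post):

```python
# Back-of-envelope arithmetic with made-up numbers: even modest per-block
# sizes add up fast at thousands of X-Fiber Blocks per second.
blocks_per_second = 5_000        # hypothetical global X-Fiber Block rate
avg_block_bytes   = 50 * 1024    # hypothetical average block size: 50 KiB
bandwidth = blocks_per_second * avg_block_bytes
print(f"{bandwidth / 1e6:.0f} MB/s")  # 256 MB/s -- far beyond a home connection
```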

...A validator agrees to validate some subset of these X-Fiber Blocks in exchange for a reward. There is essentially an auction for the validator slots, whereby the people willing to put up the largest deposits get to be validators. However, if a validator "cheats" and doesn't do all the work required to validate their assigned blocks (or even maliciously validates faulty blocks), an outside person (i.e. a fisherman) can capture this deposit by essentially "pointing a flashlight" at the faulty X-Fiber Block, which will make other validators have a look-see as well.
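Here's how I picture the deposit/challenge game, as a toy Python model (all the names and the winner-takes-the-whole-deposit payout rule are my assumptions, not the actual mechanism from the post):

```python
# Toy model of the deposit/challenge game: validators bond a deposit for their
# assigned blocks; a fisherman who flags a block the validator wrongly signed
# off on captures that deposit. Entirely my own sketch of the idea.
from dataclasses import dataclass

@dataclass
class Validator:
    name: str
    deposit: int            # bonded stake, forfeited on proven misbehaviour
    signed_blocks: set      # X-Fiber Block IDs this validator attested to

def challenge(validator: Validator, block_id: str, block_is_invalid: bool) -> int:
    """A fisherman 'points a flashlight' at block_id. If the validator
    attested to it and the block really is invalid, the deposit is seized."""
    if block_id in validator.signed_blocks and block_is_invalid:
        reward, validator.deposit = validator.deposit, 0
        return reward        # paid to the fisherman
    return 0                 # false alarm: nothing happens (in this toy model)

v = Validator("val-1", deposit=1000, signed_blocks={"xfb-42"})
print(challenge(v, "xfb-42", block_is_invalid=True))  # 1000 goes to the fisherman
```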

In this manner, it would be probabilistically and economically very difficult (but not theoretically impossible) for a person to get an invalid transaction confirmed (in an invalid X-Fiber Block). However, if a person COULD manage this, they could do any number of nasty things, such as giving themselves large amounts of money out of thin air.
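To get a feel for what "probabilistically very difficult" might mean, here's a crude independent-reviewers model (entirely my own, with made-up numbers):

```python
# If each of k independently chosen validators/fishermen catches an invalid
# block with probability p, the chance it slips past ALL of them shrinks
# geometrically. Both numbers below are hypothetical.
p, k = 0.9, 10                  # hypothetical catch rate and reviewer count
evasion = (1 - p) ** k
print(f"{evasion:.1e}")         # 1.0e-10 -- vanishingly small, but nonzero
```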

If this "worst case" scenario happens (and the goal would be that this circumstance NEVER arises, again because of probabilistic and economic factors) then the currency would have to receive a software patch which both "rolls back" the invalid transaction(s) and hopefully puts in place countermeasures for that type of attack never to happen again.

From my own perspective as a novice, here are the main concerns that come to mind:

  1. In the "worst case" scenario (i.e. a horribly faulty block slips through the fishermen's nets) an attacker could receive a large amount of invalid currency all at once, which they could then quickly "smear" across all the fibers with mixing. It seems like it would be very difficult to undo such damage retroactively with a software patch, even if everyone agrees on the faultiness of the block, since the quantities of data involved in such an "infinitely scalable" system are just so huge.

  2. If invalid blocks can get missed by fishermen, is it even conceivable that invalid blocks get missed by EVERYBODY? In other words, could it be that a decade later a researcher analyses the system and says, "There's a trillion more ethers in the system than there are supposed to be, and we'll never be able to find out where those ethers came from"? (Naturally this supposes that the data has become so large that no single entity can maintain and analyse the full block history of all fibers anymore.)

  3. I worry about the "amplification" caused by the doubling of deposits suggested by the "use validators to validate other validators" mechanism. Let's say somebody finds a small satoshi rounding-error bug in the source code that lets them create an ambiguously-valid block somewhere: they might be able to leverage the validation mechanism to multiply this small bug into a million-dollar deposit payout by pitting validators against each other (see the toy sketch after this list).

  4. It's hard for me to figure out the economic interests of collators, and whether some people might end up with transactions between distant fibers that simply will never, ever get confirmed.
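To make concern #3 concrete, here's the escalation I have in mind (entirely my own toy model, not anything from the post):

```python
# If each challenge round doubles the deposits at stake, an ambiguously-valid
# block worth pennies could put a large sum in play after a handful of rounds.
initial_stake = 1                     # say, $1 of deposit in dispute
rounds = 20                           # hypothetical number of doubling rounds
print(initial_stake * 2 ** rounds)    # 1,048,576 -- a million-dollar payout
```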

Anyway, this is really exciting reading; I'm excited to see how the Chain-Fibers concept progresses!
