Yes, but there's a limit to how effective the concurrency is because of Amdahl's Law.
It says, roughly: the speedup you can get from parallelizing a program is limited by the fraction of time spent in the parts that *cannot* be parallelized.
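A minimal numerical sketch of the law (the 90% parallelizable fraction is an illustrative assumption, not a measured figure for block validation):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's Law: S = 1 / ((1 - p) + p / n), where p is the
    parallelizable fraction of the work and n the number of threads."""
    return 1.0 / ((1.0 - p) + p / n)

# Even if 90% of the work could run in parallel, the serial 10%
# caps the speedup below 10x no matter how many threads you add:
for n in (2, 8, 64, 1_000_000):
    print(n, round(amdahl_speedup(0.9, n), 2))
# → 2 1.82
#   8 4.71
#   64 8.77
#   1000000 10.0
```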
So in this case the methods that verify the transactions inside the block can work in parallel, but there is a host of other things inside that process that can't be sped up:
- Time spent obtaining the block from a peer (even if multiple peers are queried at the same time, the code running the block retrieval runs at least once)
- Writing the block and chainstate to LevelDB
- For each transaction input in the block, looking up the referenced UTXO to check that it has indeed not been spent already
- All of the checks in the CheckBlock and CheckBlockHeader functions, which must ultimately be performed one way or another, so there will always be a minimum resource consumption associated with them.
Assuming the system theoretically had enough threads to verify one transaction in a block per thread, all of this is already running in parallel, and sharding is not going to make any of it go faster.
That's not entirely the concept behind sharding when we are talking about blockchain.
Yes, you are right if you want Bitcoin to work as-is: retrieving all the blocks individually and checking every one of them is computationally expensive and doesn't scale linearly. Sharding, however, has each node store only its own group of shards. Fraud proofs, ZK-SNARKs and other detection mechanisms then act as counters to penalize rogue actors within the system. If you use random sampling to test the correctness of blocks, you can ascertain with high probability that the blocks are available to the network and correspond to their fraud proofs. The efficacy of those measures does, however, need to be evaluated again.
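To illustrate why random sampling gives such confidence, here is a hedged sketch of the underlying probability argument (the 1% withholding fraction and 300 samples are illustrative assumptions, not parameters from any real sharding protocol):

```python
def detection_probability(f: float, k: int) -> float:
    """Probability that k independent uniform samples hit at least one
    bad chunk, when a rogue actor withholds a fraction f of the data.
    Each sample misses with probability (1 - f), so missing all k times
    has probability (1 - f)**k."""
    return 1.0 - (1.0 - f) ** k

# Withholding even 1% of the data is caught ~95% of the time
# after 300 random samples:
print(round(detection_probability(0.01, 300), 3))
# → 0.951
```

The point is that detection probability grows exponentially in the number of samples, so light clients can gain high assurance without downloading whole blocks.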
The problem doesn't lie with parallelization, but rather with the tradeoff against security. Bitcoin doesn't exactly need to make that decision as of now.
If yes, then there is probably room to increase transaction throughput, or maybe to speed up processing by reducing the network difficulty so that the available hashing power indirectly yields confirmations faster.
Am I right to think this way, or is this assumption just headed for the south pole? Your statement and whatever NotATether has referred to contradict each other a lot.
Adding latency through a data-sharding architecture is like putting up roadblocks intentionally, going by Amdahl's Law as explained by NotATether. On the other hand, other members have made different assumptions about the sharding architecture.
It seems the concept is either a mismatch or it has not been properly understood.