Showing 20 of 21 results by pondjohn
Post
Topic
Board Services
Re: BitcoinExchange.US [Domain Name]
by
pondjohn
on 10/07/2017, 10:32:52 UTC

This is not the right section. You need to put this in the auction section so that you can get bidders:
https://bitcointalk.org/index.php?board=73.0

Also, to get a valuation of your domain you can check here:

https://www.estibot.com/

Thanks for your response, so it seems an auction would be the best way to sell such a domain.

I'm first trying to determine the likely value. Sedo's algorithm recommended a price of £449 ($578), but Estibot suggested $45.

Bitcoin domains are quite specialist, I wondered if there was any consensus on the best way to value them, or what this domain could likely achieve?

Many thanks.
Post
Topic
Board Services
Topic OP
BitcoinExchange.US [Domain Name]
by
pondjohn
on 29/06/2017, 14:01:42 UTC
I procured the domain bitcoinexchange.us some time ago because I thought it was rather good.

Perfectly suited to easily exchanging US dollars with Bitcoin. Or to say "Bitcoin Exchange? That's us!"

I wondered if anybody had any ideas what it might be worth, and also the best place to sell it?

Thanks
Post
Topic
Board Project Development
Re: [ANN] Bitcoin PoW Upgrade Initiative
by
pondjohn
on 19/03/2017, 21:24:16 UTC
I believe the best option would be the addition of multiple proofs of work. Perhaps 4 (including SHA256).

There are many benefits: diversity of hardware, dilution of the impact of centralised technology in any one method, and it doesn't totally punish those who invested in ASICs - making it more likely to gain support.

Also, we could round-robin the different methods; this would mean incompatible hardware has 'down time', lowering the electricity cost relative to the hardware - also good for decentralisation.

If you had 4 proof of work methods, you could require that 2 separate methods have found a block before a method is allowed to commence hashing again.

This would also allow for a soft fork if one method became malicious, so that method could no longer produce blocks - keeping each method honest.
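The "2 separate methods first" rule above can be sketched in code. This is a hypothetical illustration only - the method names, the helper function, and the list-of-recent-blocks representation are all my own, not from any implementation:

```python
# Hypothetical sketch of the round-robin rule: with 4 PoW methods, a
# method may only resume hashing once 2 *other* methods have each found
# a block since its own most recent block.

ALL_METHODS = ("sha256", "scrypt", "ethash", "equihash")

def eligible_methods(recent_blocks, methods=ALL_METHODS, gap=2):
    """recent_blocks: list of method names, oldest first, one per block.
    Returns the set of methods currently allowed to hash."""
    eligible = set()
    for m in methods:
        if m not in recent_blocks:
            # A method that has never produced a block is free to hash.
            eligible.add(m)
            continue
        # Index of m's most recent block, then the blocks found since it.
        last = len(recent_blocks) - 1 - recent_blocks[::-1].index(m)
        since = recent_blocks[last + 1:]
        if len({b for b in since if b != m}) >= gap:
            eligible.add(m)
    return eligible
```

For example, after the history `["sha256", "scrypt", "ethash"]`, SHA256 (two other methods have found blocks since its own) and Equihash (no block yet) may hash, while Scrypt and Ethash must wait.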

I created a topic on this idea, possibly in the wrong location:
https://bitcointalk.org/index.php?topic=1832581.0
Post
Topic
Board Development & Technical Discussion
Re: Proposal: Malice Reactive Proof of Work Additions (MR POWA). Self defense HF
by
pondjohn
on 19/03/2017, 19:28:29 UTC
A big issue with this is that although online nodes might be able to detect a long invalid chain, nodes that were offline at the time (or didn't exist yet) have no way of independently verifying that invalid blocks actually existed then. Maybe the invalid blocks were created much later in order to trigger a PoW change earlier than appropriate on some nodes, splitting the network.

You could make a rule that a block is allowed to change the PoW if it presents headers for an invalid chain of length > 50 or something, with the fork point close to the new-PoW block. So once a long invalid chain comes into existence, anyone can create the first new-PoW block containing the invalid chain's headers. This can be more readily verified later on.

Exactly what the new PoW should be is a complicated issue with years of past discussion already...

That is a good point about verifying the historical existence of invalid blocks.

One of the things we can do with the addition of a proof of work, rather than a complete change, is compare the SHA256 hashpower between different competing chains. Future nodes will be able to assess the health of the MR POWA fork compared to the 'malicious' chain. If the POWA fork has a substantially lower hashpower, it can be assumed the POWA fork failed to achieve economic significance. A proof of work change (POWC) hardfork has no way to compare the different chains and determine whether one likely succeeded economically over the other.
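Comparing the SHA256 hashpower of competing chains amounts to comparing accumulated chainwork. A minimal sketch, assuming each block's 256-bit target is known - the "at least half" significance threshold is an invented example, not part of the proposal:

```python
def chain_work(targets):
    """Total expected work for a chain, given each block's 256-bit target.
    Work per block is 2^256 / (target + 1), as in Bitcoin's chainwork."""
    return sum((1 << 256) // (t + 1) for t in targets)

def powa_fork_significant(fork_targets, malicious_targets):
    """Illustrative heuristic (threshold is an assumption): judge the
    MR POWA fork economically significant only if its SHA256 chainwork
    is at least half of the competing chain's."""
    return 2 * chain_work(fork_targets) >= chain_work(malicious_targets)
```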
Post
Topic
Board Development & Technical Discussion
Topic OP
Proposal: Malice Reactive Proof of Work Additions (MR POWA). Self defense HF
by
pondjohn
on 18/03/2017, 16:05:50 UTC
I’m very worried about the state of miner centralisation in Bitcoin.

I always felt the centralising effects of ASIC manufacturing would resolve themselves once the first mover advantage had been exhausted and the industry had the opportunity to mature.

I had always assumed initial centralisation would be harmless since miners have no incentive to harm the network. This does not consider the risk of a single entity with sufficient power and either poor, malicious or coerced decision making. I now believe that such centralisation poses a huge risk to the security of Bitcoin and preemptive action needs to be taken to protect the network from malicious actions by any party able to exert influence over a substantial portion of SHA256 hardware.

Inspired by UASF, I believe we should implement Malicious-miner Reactive Proof of Work Additions (MR POWA).

This would be a hard fork activated in response to a malicious attempt by a hashpower majority to introduce a contentious hard fork.

The activation would occur once a fork was detected violating protocol (likely oversize blocks) with a majority of hashpower. The threshold and duration for activation would need to be carefully considered.

I don’t think we should eliminate SHA256 as a hashing method and change POW entirely. That would be throwing the baby out with the bathwater and hurt the non-malicious miners who have invested in hardware, making it harder to gain their support.

Instead I believe we should introduce multiple new proofs of work that are already established and proven within existing altcoin implementations. As an example we could add Scrypt, Ethash and Equihash. Much of the code and mining infrastructure already exists. Diversification of hardware (a mix of CPU and memory intensive methods) would also be positive for decentralisation. Initial difficulty could simply be an estimated portion of existing infrastructure.

This example would mean 4 proofs of work with a 40-minute block target for each. There could also be a rule that two different proofs of work must find a block before a method can start hashing again. This means only 50% of hardware would be hashing at a time, and a sudden gain or drop in hashpower from a particular method would not dramatically impact the functioning of the network between difficulty adjustments. This also adds protection from attacks by the malicious SHA256 hashpower, which could even be required to wait until all other methods have found a block before being allowed to hash again.
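The arithmetic behind the 40-minute per-method target, plus a Bitcoin-style per-method retarget, can be sketched as follows. This is a simplification under my own assumptions (independent Poisson block finding; real difficulty adjustment would need careful design):

```python
METHODS = 4
PER_METHOD_TARGET_SECS = 40 * 60  # each method aims for one block per 40 min

def network_block_interval(methods=METHODS,
                           per_method_secs=PER_METHOD_TARGET_SECS):
    """Four independent block-finding processes: the combined rate is the
    sum of the per-method rates, giving ~10-minute blocks overall."""
    combined_rate = methods / per_method_secs  # blocks per second
    return 1 / combined_rate                   # expected seconds per block

def retarget(old_target, actual_timespan, expected_timespan):
    """Bitcoin-style per-method retarget: scale the target by the ratio of
    actual to expected timespan, clamped to 4x in either direction."""
    clamped = max(expected_timespan // 4,
                  min(actual_timespan, expected_timespan * 4))
    return old_target * clamped // expected_timespan
```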

50% hashing time would mean that the cost of electricity in relation to hardware would fall by 50%, reducing some of the centralising impact of subsidised or inexpensive electricity in some regions over others.

Such a hard fork could also, counter-intuitively, include a block size increase: while we're hard forking anyway, it makes sense to minimise the number of future hard forks where possible. It could also activate SegWit if it hasn't already.

The beauty of this method is that it creates a huge risk to any malicious actor trying to abuse their position. Ideally, MR POWA would just serve as a deterrent and never activate.

If consensus were to form around a hard fork in the future, nodes would be able to upgrade, and MR POWA - while automatically activating on non-upgraded nodes - would be of no economic significance: a vestigial chain immediately abandoned with no miner incentive.

I think this would be a great way to help prevent malicious use of hashpower to harm the network. This is the beauty of Bitcoin: for any road block that emerges the economic majority can always find a way around.

Any thoughts?
Post
Topic
Board Development & Technical Discussion
Re: Increasing blocksize dynamically w/economic safeguards - the ideal compromise?
by
pondjohn
on 11/01/2017, 01:58:38 UTC
out of band fees can work in both directions, e.g. including rebates.  (and, in fact rebates can be done inband with coinjoins with no trust)

Also consider what your scheme does when a majority hashpower censors any transaction paying a high inband fee level.

I'm not sure I understand how you're suggesting a rebate would work?

What do you mean by a majority hashpower, a 51% attack? If a transaction has a high fee surely any miner is incentivised to include it in a block?
Post
Topic
Board Development & Technical Discussion
Re: Increasing blocksize dynamically w/economic safeguards - the ideal compromise?
by
pondjohn
on 11/01/2017, 01:20:00 UTC
There are no mandated fees in the Bitcoin protocol so the natural response to schemes like this is for miners to simply accept fees via other means (such as the direct txout method supported by eligius since 2011, or via outputs with empty scriptpubkeys, or via out of band fees) and give users a discount for using them. The expected result would be fees migrating out of the fee area, and protocols that depend on them (like your suggestion) becoming dysfunctional.  Sad

I previously tried to rescue this class of proposal by having the  change not be to fees but by modifying the lowness of the required hash (effective difficulty), but it's difficult to do that in the presence of subsidy.

Unrelated, as you note your proposal is no constraint if miners agree-- this is also why it fails to address the conflict of interest between miners (really mining pools), who are paid to include transactions, and everyone else-- who experiences them as an externality except to the extent that they contribute to economic growth (not at all a necessity: e.g. many companies want to use the bitcoin blockchain without using the Bitcoin currency at all).  Still, better to solve one issue even if all can't be solved.

I did some analysis of the transaction fees, and used the data to properly demonstrate how all the risks you identify can be mitigated!

Out of band fees can be completely disincentivised.

Please have a read and let me know if you have any thoughts.

https://seebitcoin.com/2017/01/i-analysed-24h-worth-of-transaction-fee-data-and-this-is-what-i-discovered/
Post
Topic
Board Development & Technical Discussion
Re: Increasing blocksize dynamically w/economic safeguards - the ideal compromise?
by
pondjohn
on 09/01/2017, 22:47:10 UTC
Aside from the "fuzzy consensus" bit, would the same shortcomings not apply to nodes signalling for a softfork as is happening currently?  That's measurable, so this should be equally so.

Nodes don't signal for SegWit activation, only miners do.

I assumed nodes in BU signalled their max block size and the depth at which they wanted to be overridden, but perhaps it is signalled in blocks too - I haven't thoroughly examined the implementation; I just know that there is no consensus on what consensus even is.

Plus, we're apparently prepared to accept shenanigans like client spoofing, when that could have a significantly adverse effect on consensus, because there's no easy way to prevent it.  It's the nature of the beast.  So while I'm of the mindset "never say never", it's highly unlikely we're going to find a completely fool-proof solution.  Hence #1 and #3 being necessary as well.

Accept in what way? Spoofing nodes is only a social manipulation strategy; it does not achieve anything in terms of votes or consensus. The whole point is that node signalling counts can't be trusted. Period. There is no way to assess a node's contribution to the network. #2 cannot happen. Only miners or transactions can have any sway on the blockchain; as far as the blockchain is concerned, nodes are read-only.
Post
Topic
Board Development & Technical Discussion
Re: Increasing blocksize dynamically w/economic safeguards - the ideal compromise?
by
pondjohn
on 09/01/2017, 17:57:34 UTC
For me, the ideal compromise should tick three boxes:

    1) An algorithmic element based on transaction volumes, so change only happens when required
    2) A way for both miners *and* nodes to signal the size of the adjustment they are willing to accept, to maintain equilibrium and to ensure miners can't dictate to the rest of the network
    3) Another algorithmic element taking into consideration the average total fees received per block over all the blocks in the previous difficulty period, to ensure economic viability and to avoid rigging by any single pool or entity.

No easy task, for sure.  But it feels like all the elements are there and just need putting together somehow.  BIP106 came very close, but left some important elements missing.  Mainly #2, a way for full nodes to set a "this far and no further" threshold.  It's almost ironic that for all the complaints on this forum about the BU client, #2 is almost exactly what it does (although again, tends to encourage whole numbers and not decimals, adjustments should be in fractions of a MB).

The insurmountable problem with #2, beyond BU's implementation making something absolute like consensus into something fuzzy, is that it is impossible to avoid Sybil manipulation. Also, it isn't really easy to take a poll of all nodes on the network. The closest you could get is asking individual transactions to signal, but that adds extra bloat on chain, and gives the power to users instead of nodes, when really it is a decision for the latter.

Also, a node is just an IP in terms of measuring 'support'.

If you look at nodes you might end up with a super high bandwidth node that serves many concurrent connections, and somebody's tiny Raspberry Pi hobbyist setup over dodgy wifi. Giving each node equal say is ripe for gaming.

In practice, would it not be a case of whoever blinks first loses money?  How else would miners know what to agree on unless someone starts signalling first?

It would introduce some interesting game theory, for sure. It is possible that, if there were clear consensus to increase the block size, miners would avoid 'paying' to signal for an increase at the beginning of a cycle rather than at the end; it would depend on the strength of consensus and how desperate they were to get it 'passed' - a little like lawmakers.

The stakes are low enough that the real cost is only incurred for signalling against consensus over a sustained period. Getting 'caught' occasionally - having to pay to signal because somebody who shares your goal found a block before you and didn't signal - is not a big deal and would average out over time.
Post
Topic
Board Development & Technical Discussion
Re: Increasing blocksize dynamically w/economic safeguards - the ideal compromise?
by
pondjohn
on 09/01/2017, 15:02:20 UTC
Another way to do it is instead of averaging out the votes, say 5 votes: 0 + 0 + 1.35% + 2.7% + 2.7% = +1.35% increase.

You could have the option with the most votes win. That way, miners who are signalling +2.7% do not have their higher investment diluted down, as long as they have consensus. Once the threshold is reached, all miners would drop down to not voting until the next round.
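A hypothetical sketch of the most-votes-wins tally (the function name and the None-for-not-voting convention are mine). Note that a real consensus rule would need an explicit tie-breaker, since `Counter.most_common` breaks ties by insertion order:

```python
from collections import Counter

def winning_adjustment(votes):
    """votes: list of signed percentage adjustments, one per block,
    with None meaning 'not voting'. Returns the most common adjustment
    cast, or 0.0 if no blocks voted at all."""
    cast = [v for v in votes if v is not None]
    if not cast:
        return 0.0
    return Counter(cast).most_common(1)[0][0]
```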
Post
Topic
Board Development & Technical Discussion
Re: Increasing blocksize dynamically w/economic safeguards - the ideal compromise?
by
pondjohn
on 09/01/2017, 12:47:14 UTC
There are no mandated fees in the Bitcoin protocol so the natural response to schemes like this is for miners to simply accept fees via other means (such as the direct txout method supported by eligius since 2011, or via outputs with empty scriptpubkeys, or via out of band fees) and give users a discount for using them. The expected result would be fees migrating out of the fee area, and protocols that depend on them (like your suggestion) becoming dysfunctional.  Sad

Hi greg, thanks for taking a look at the idea.

The issue you highlight is why I had the idea to exclude zero/far below mean fee transactions from calculating the fullness of blocks to justify an increase.

Therefore there would be no benefit to the miners who want to accept fees out of band in an effort to avoid paying for a vote to increase the block size, as increasing the block size would not be possible unless the blocks are sufficiently full with transactions that are paying a fee.

This doesn't change the fundamentals of Bitcoin - fees are still not mandated, and there is no penalty for including zero-fee transactions. It's just that if there are a large number of transactions in blocks paying fees substantially below the mean, the block size cannot grow - just as it can't at the moment.
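The fee-filtered fullness measure described above could look something like this. The threshold (a quarter of the mean fee rate) and the function shape are my own illustrative assumptions:

```python
def effective_fullness(txs, max_block_size, min_ratio=0.25):
    """txs: list of (size_bytes, fee_satoshis) pairs for one block.
    Returns fullness in [0, 1], counting only transactions whose fee
    rate is at least `min_ratio` of the mean fee rate over fee-paying
    transactions. Zero-fee transactions never count toward fullness."""
    paying = [(s, f) for s, f in txs if f > 0]
    if not paying:
        return 0.0
    mean_rate = sum(f / s for s, f in paying) / len(paying)
    counted = sum(s for s, f in paying if f / s >= min_ratio * mean_rate)
    return min(1.0, counted / max_block_size)
```

Under this measure, a miner stuffing blocks with zero-fee or token-fee transactions gains nothing: the stuffed bytes simply don't count toward the fullness needed to justify an increase.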

I hope what I'm trying to say makes sense. Do you think this could help mitigate the out of band fees problem?
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Topic OP
Increasing blocksize dynamically w/economic safeguards - the ideal compromise?
by
pondjohn
on 09/01/2017, 11:38:28 UTC
⭐ Merited by ETFbitcoin (1)
One side of the block size debate wants to hand over control of the block size to the miners.  Many fear such an implementation would cause catastrophic failures of consensus, and that miners could even be incentivised to bloat the block size at a rate that overly compromises Bitcoin’s decentralisation.

Others are worried that scaling solutions such as Lightning Network and sidechains will take too long and not achieve sufficient gains, stifling Bitcoin’s network effect and preventing its continued exponential growth.

What if there were a way to simultaneously allow for exponential growth on chain if needed – allowing time for layer-two solutions to take some heat off the chain – while also creating an economic disincentive for miners trying to inflate the block size arbitrarily?

Such a solution should allow for an exponential increase in block size if miners were in consensus, but require they face an economic risk when signaling for a block size increase where there was no consensus. Cryptoeconomics is built on incentive game theory, why not introduce it here?

Allowing the block size to change dynamically with demand would reduce the risk of requiring additional contentious block size hard forks and hostile debate. I fear a simple 2MB increase would reignite the debate almost as soon as it was activated, we need to buy as much time as possible.

Any solution is going to be a compromise, but by allowing a few years of exponential growth with strict safeguards and appropriate economic incentives we can hopefully achieve that.

So how do we do it?

My basic idea is for miners to vote in each block to increase the block size.

Allowing for exponential growth would mean that the block size could double every year.

This would be achieved by each of the previous 2016 blocks voting to increase the block size by the maximum amount of 2.7% each time. An increase of 2.7% every 2 weeks would result in an annual block size increase of 99.9% (rounding).

We only need to use 3 bits for miners to vote on block size:
000 = not voting
001 = vote no change
011 = vote decrease 2%
101 = vote increase 1.35%, pay 10% of transaction fees to next block
111 = vote increase 2.7%, pay 25% of transaction fees to next block

Not including any transactions in a block will waive a miner’s right to vote.

Each block is a vote, and the block size change could be calculated by averaging out all the votes over 2016 blocks.
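The voting arithmetic can be sketched as follows. The 3-bit field layout is taken from the post; the helper names and the non-vote-counts-as-zero convention are my own assumptions:

```python
VOTE_VALUES = {           # 3-bit code -> (adjustment %, fee share to next block)
    0b000: (None, 0.0),   # not voting
    0b001: (0.0, 0.0),    # vote no change
    0b011: (-2.0, 0.0),   # vote decrease 2%
    0b101: (1.35, 0.10),  # vote increase 1.35%, pay 10% of fees forward
    0b111: (2.7, 0.25),   # vote increase 2.7%, pay 25% of fees forward
}

def period_adjustment(codes):
    """Average block size adjustment (%) over a retarget period of
    votes; non-voting blocks count as zero."""
    adjs = [VOTE_VALUES[c][0] or 0.0 for c in codes]
    return sum(adjs) / len(adjs) if adjs else 0.0

# Sanity check on the annual figure quoted above: 26 fortnights of
# +2.7%, compounded, come out close to the ~99.9% annual increase.
annual_growth_pct = (1.027 ** 26 - 1) * 100
```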

In order to achieve an increase in block size, the blocks must also have been sufficiently full to justify one. Transactions with no fee, and perhaps outliers far below the mean tx fee/kB, should not be included in this calculation.

By asking miners to pay a percentage of their transaction fees to the miner of the next block, you discourage miners from stuffing the blocks with transactions to artificially inflate the block size.

If miners are in unanimous agreement that the block size needs to increase, the fees would average out and all miners should still be equally rewarded. Only miners trying to increase the block size when consensus is not there would incur a cost.

There should be a limit on the maximum increase, perhaps 8MB. This isn’t a permanent solution, it is just to create time for Bitcoin to progress, and then re-evaluate things further down the line. Combined with SegWit this should provide a reasonable balance between satisfying those who are worried about missing out on exponential growth for a few years if LN and other solutions are not as fast or effective as hoped.

This is my rough idea for trying to find a compromise we can all get behind. Any thoughts?

Link to blog post: https://seebitcoin.com/2017/01/dynamic-block-size-with-economic-safeguards-could-this-be-the-solution-that-we-can-all-get-behind/
Post
Topic
Board Bitcoin Discussion
Re: Sidechain protocol creating an addressable "Internet of Blockchains" for scaling
by
pondjohn
on 07/10/2016, 09:05:10 UTC
a side chain still is a chain. still has to store data. but i see the concept you are thinking of.
if you want to be a full node. yes you can shut down your bitcoin node and then run an angel branch/wing:aa9 node that only amounts to storing/validating 0.0244% of transactions due to there being 4096(im presuming by the aa9 hex example) branches/wings(altcoin) of angel.

what you are proposing is that Angel becomes the 'reserve currency' like the IMF bank and each 'child' blockchain/wing is a bank branch, each with its own sortcode/routing number(aa9).

the end result is when people lock bitcoins, to play with altcoin aa9. they no longer need to run a bitcoin node and miners no longer need to mine bitcoin because they are then "spending" and protecting their value on a IMF(angel) bank branch/wing: aa9. and if that branch/wing chain is a public chain it will only be protected by the 'users' in that branch/wing. which is far LESS secure than everyone concentrating the entire hashpower on just one chain.

That's not how I envisioned it.

Less like a bank routing/sort code, more like IP addresses. There is a huge addressable space; however, I imagine the market would form consensus around a far, far smaller number of blockchains. IPv6 has an addressable space of 2^128 addresses; it's just better to have an abundance of addressable space than to end up with another IPv4. A large addressable space enables a free market where anybody can create a blockchain, but does not make higher numbers of blockchains more viable. There is no finite capacity of the kind that would lead to centralisation.

Perhaps the direct children of Angel would be regional, with a main blockchain for Europe, Asia, and so on, and then sub-blockchains, each of which has children with higher volumes but more specific use cases. At any level, hashpower is always the sum of all children's; security is a tradeoff at the lower levels, but there is no solution that allows blockchains to scale (on chain) without a security tradeoff. The more you pay, the higher the security.

With regards to the incentive to no longer mine the main Bitcoin blockchain, you could have a fee at each level that passes up say 20% of fees right the way to the Angel. This money pools together, and for every merge mined block, the miner gets all that Bitcoin in reward, so merge mining the main chain is rewarded.

thus voiding hash power/difficulty away from bitcoin and then diluting that hashpower/difficulty by splitting it up into different and separate branches. because bitcoin becomes the unused leaf at the edge of the angel tree.

With a hard fork to Bitcoin, the entire system could be merge mined together, which would actually increase security of Bitcoin. In a PoW tree structure, there is no loss of overall network security, there is just more choice where a user can decide to have less security at a lower cost. You can just distribute and scale the PoW more effectively.

this idea seems a way to just push everyone onto a new ALTCOIN called angel and then technically onto many altcoins called angel:000-fff and slowly make bitcoin die when less people are playing with bitcoin due to their coins being locked.

It's not to push at all. If it offers a better system at a lower price, people will take their coins into it. We need to solve the problem of on chain scaling, sidechains are happening however we implement them. In 2140, when there are no more coinbase transactions, it is entirely possible that all coins will have been moved away from the main blockchain to something more versatile and it will be abandoned. That is not a problem, and it is not something to trouble ourselves with now.

the issue i see is that the 'branch managers' then has all the private keys to unlock the bitcoins while the customers are playing with the aa9 chain.
causing replay attacks(double spending via 2 different coins) as there needs to be a privkey somewhere to unlock coins if people wish to return.
(that issue alone of 'who/how the privkeys are managed' needs a solution, before anything else to make the concept viable)
the issue i see is that the rarity/production cap is then evaporated.
the issue i see is that users then have to trust middlemen creating new altcoins.
the issue i see is sending funds to people in different branches becomes that bit more complicated due to not transacting in just 1 chain

The tree structure (where parents are aware of, but not synchronised with children), allows us to do some cool things with security.

The biggest risk is trying to spend on a child blockchain, and then creating a withdraw transaction on a parent so you keep the coins (double spend).

Children synchronise all parents, so if anybody mines a block that is a double spend, the children will see it immediately. They can then broadcast proof of the double spend to all parent nodes, and miners will not build on that block. This allows you to effectively gain additional security from a parent, without additional cost.

Children always follow parents, so if a parent chain reorganises, so do the children. This prevents inconsistencies.

this essentially is a more dangerous idea than LN. because:
LN doesnt impact bitcoins security of hashpower and difficulty as much, if anything.
LN doesnt impact the rarity/deflationary production cap

as i said. all i can see is how the OP wants to 'manage' the next International monetary Fund(angel) and rule the roost, by inventing new bank branches beneath it and causing bitcoin to get down graded into just a small communities insecure credit union. because bitcoin would no longer be at the centre:
Quote
To move Bitcoin between them would involve a slow transfer back to the mainchain, and then out again to a different sidechain.
Could we instead create a protocol for addressable blockchains, all using a shared proof of work, which effectively acts as an Internet of Blockchains?

we should however be thinking about protecting and expanding bitcoin to remain 'unmanaged' by middlemen to become the new IMF, where bitcoin remains the gateway in and out of all altcoins

I don't know how you think the system is being managed by middlemen. The gateway is completely decentralised and has no middlemen. It is just an unmanaged protocol that nobody has any control over. There are no private keys, no IMF.

Sidechains are still free to have a direct relationship with the main blockchain and ignore the system; it's just that the system facilitates a huge ecosystem and network from which all participants benefit, so they'd be silly not to.

It is not a more "dangerous" idea than the Lightning Network; you're comparing apples with oranges. It is an entirely different system: lightning networks are better for day-to-day payments, while this enables something more like Ethereum - a Turing-complete machine (or every and any type of blockchain).
Post
Topic
Board Bitcoin Discussion
Topic OP
Sidechain protocol creating an addressable "Internet of Blockchains" for scaling
by
pondjohn
on 06/10/2016, 22:47:24 UTC
Sidechains seem an inevitable tool for scaling. They allow Bitcoins to be transferred from the main blockchain into external blockchains, of which there can be any number with radically different approaches.

In current thinking I have encountered, sidechains are isolated from each other. To move Bitcoin between them would involve a slow transfer back to the mainchain, and then out again to a different sidechain.

Could we instead create a protocol for addressable blockchains, all using a shared proof of work, which effectively acts as an Internet of Blockchains?

Instead of transferring Bitcoin into individual sidechains, you move them into the master sidechain, which I'll call Angel. The Angel blockchain sits at the top of a tree of blockchains, each of which can have radically different properties, but all of which are able to transfer Bitcoin and data between each other using a standardised protocol.

Each blockchain has its own address, much like an IP address. The Angel blockchain acts as a registrar, a public record of every blockchain and its properties. Creating a blockchain is as simple as getting a createBlockchain transaction included in an Angel block, with details of parameters such as block creation time, block size limit, etc.

Mining in Angel uses a standardised format, creating hashes which allow all different blockchains to contribute to the same Angel proof of work. Miners must hash the address of the blockchain they are mining, and if they mine a hash of sufficient difficulty for that blockchain they are able to create a block.

Blockchains can have child blockchains, so a child of Angel might have the address aa9, and a child of aa9 might have the address aa9:e4d. The lower down the tree you go, the lower the security, but the lower the transaction fees. If a miner on a lower level produces a hash of sufficient difficulty, they can use it on any parents, up to and including the Angel blockchain, and claim fees on each.
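The shared proof of work across the tree can be sketched as follows. Everything here is invented for illustration - the addresses, the per-chain targets, and hashing an address-plus-nonce string in place of a real block header:

```python
import hashlib

# Per-chain 256-bit targets: a lower target means higher difficulty.
# Parents are assumed harder than their children.
TARGETS = {
    "angel":         2**240,
    "angel:aa9":     2**246,
    "angel:aa9:e4d": 2**250,
}

def ancestry(address):
    """The chain itself and every ancestor, child first:
    angel:aa9:e4d -> angel:aa9 -> angel."""
    parts = address.split(":")
    return [":".join(parts[:i]) for i in range(len(parts), 0, -1)]

def chains_satisfied(address, nonce):
    """Hash the mined chain's address with a nonce (standing in for a
    real block header) and return every chain in its ancestry whose
    target the hash meets - fees would be claimable on each."""
    h = int.from_bytes(
        hashlib.sha256(f"{address}:{nonce}".encode()).digest(), "big")
    return [c for c in ancestry(address) if h < TARGETS[c]]
```

Because parents have lower (harder) targets, any hash strong enough for a parent automatically satisfies every chain beneath it on the path, which is what lets one unit of work secure multiple levels of the tree at once.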

There are so many conflicting visions for how to scale Bitcoin. Angel allows the free market to decide which approaches are successful, and for complementary blockchains with different use cases, such as privacy, or high transaction volume, to more seamlessly exist alongside each other, using Bitcoin as the standard currency.

I wrote this as a TLDR summary for a (still evolving) idea I had on the best approach to scale Bitcoin infinitely. I've written more of my thoughts on the idea at https://seebitcoin.com/2016/09/introducing-buzz-a-turing-complete-concept-for-scaling-bitcoin-to-infinity-and-beyond/

Does anybody think this would be a better, more efficient way of implementing sidechains? It allows infinite scaling, and standardisation allows better pooling of resources.
Post
Topic
Board Bitcoin Discussion
Topic OP
Buzz: an idea to bring infinite, Turing-complete scaling to Bitcoin
by
pondjohn
on 26/09/2016, 21:01:23 UTC
Hi all. I had an idea to implement infinite scaling, possibly to be used on something like Rootstock.

Please have a look and let me know what you think:
https://seebitcoin.com/2016/09/introducing-buzz-a-turing-complete-concept-for-scaling-bitcoin-to-infinity-and-beyond/
Post
Topic
Board Development & Technical Discussion
Re: Increasing the blocksize as a (generalized) softfork.
by
pondjohn
on 13/01/2016, 01:24:32 UTC
Hi ZoomT, I attempted to provide a non-technical explanation of the principle by which a generalised softfork could be achieved; is this a fair representation of your basic idea?

http://pondpolitics.com/2016/01/hard-vs-soft-fork-is-there-a-third-way-to-increase-the-bitcoin-block-size/
Post
Topic
Board Bitcoin Discussion
Re: Hard vs soft fork: is there a third way to increase the Bitcoin block size?
by
pondjohn
on 12/01/2016, 23:49:48 UTC
It does share elements of a hard fork where the protocol is updated, yes.

I'm not speculating on block size increase, I'm just talking about hypothetically, if one were to occur, what is the best way to achieve it.

This method has the effect of degrading more gracefully for users on the old protocol, rather than suddenly leaving them on their own, vulnerable to double spend attacks.
Post
Topic
Board Bitcoin Discussion
Re: Hard vs soft fork: is there a third way to increase the Bitcoin block size?
by
pondjohn
on 12/01/2016, 23:02:31 UTC
Quote
This doesn't make sense to me.

What it is saying is that, to get around a hard fork (and a soft fork), some miners will mine a new block type.
This new block is at 1MB, but can "expand" to bigger than 1MB by allowing included txs to fall outside of that 1MB limit
and into the "mushroom" section of this new block. Transactions in this section cannot be seen, verified, or (technically) spent.

That means when people use Bitcoin, they won't know whether their tx will land within the 1MB section or the mushroom section.

This alone will cause too much damage to the marketplace, especially with merchants.
Bitcoin is based on a provable tx. You're now saying we won't be able to prove or see some txs.

It would require that all miners have upgraded, and they all produce a new type of block that can contain more than 1MB of transactions.

People with updated software will be able to view/spend transactions in the new type of block, but people with the old software will find that some of their transactions fail.

Any damage this proposal would do, a hard fork would do too, but probably worse.

A hard fork could result in a merchant caught on the wrong side of the fork sending out an item they think they've received a payment for, when the transaction never existed on the winning newer fork.

In this mushroom blocks scenario, if the payment ended up in the mushroom section the merchant would simply think they had not yet received it; as soon as their software was upgraded, they would see that the payment had been received and could dispatch the item, without any risk of losing Bitcoin.

You would imagine everyone would upgrade pretty quickly once they stopped receiving payments and the people sending them explained that they needed to upgrade.
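To make the merchant scenario above concrete, here is a minimal sketch in Python. All names and structures (`Block`, `visible_txs`, the 1MB legacy limit) are invented for illustration and do not correspond to any actual implementation; the point is only that a payment in the mushroom section is invisible to old software yet revealed, not lost, once the node upgrades.

```python
# Illustrative model only: a block with a 1MB legacy section that old
# software parses, plus a "mushroom" section only upgraded software sees.

MAX_LEGACY_BYTES = 1_000_000  # 1 MB legacy section limit

class Block:
    def __init__(self, legacy_txs, mushroom_txs):
        # Old software only parses the legacy section, so it must stay <= 1MB.
        assert sum(tx["size"] for tx in legacy_txs) <= MAX_LEGACY_BYTES
        self.legacy_txs = legacy_txs
        self.mushroom_txs = mushroom_txs

def visible_txs(block, upgraded):
    """Return the transactions a node can see in a block."""
    if upgraded:
        return block.legacy_txs + block.mushroom_txs
    return block.legacy_txs

payment = {"txid": "pay-merchant", "size": 250}
block = Block(legacy_txs=[], mushroom_txs=[payment])

# Before upgrading, the merchant's node cannot see the payment...
assert payment not in visible_txs(block, upgraded=False)
# ...but after upgrading, the very same block reveals it: nothing was lost.
assert payment in visible_txs(block, upgraded=True)
```

The key design property is that upgrading changes only what a node can see, never which chain it follows, so the merchant's funds were always there.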
Post
Topic
Board Bitcoin Discussion
Re: Hard vs soft fork: is there a third way to increase the Bitcoin block size?
by
pondjohn
on 12/01/2016, 22:44:19 UTC
Did you read the article? Here's an extract of the key part:

Quote
There is an entirely different debate about how much or even whether the block size limit needs raising, but here I will solely be doing my best to explain one of the ways a future block size increase could be achieved.

To begin with, I need to explain the difference between a hard and soft fork.

It relates to backwards compatibility. Imagine you and many others are playing an online game where you all roam around a virtual world together collecting flowers to score points. You’re all trying to achieve a high score.

One day, the game releases an update, and adds mushrooms to this world. Just like flowers, the mushrooms can also be collected to score points.

How this upgrade is implemented can be compared to a soft fork and a hard fork.

If the upgraded version of the software is not compatible with the old version, you have a hard fork. In this situation, anybody who updates their software will be in a new world, we’ll call it mushroom land, and anybody who has the old software will be in flower land. The people in mushroom land know they arrived there through an update. Back in flower land, people may wonder where all the other players disappeared to, but will continue playing amongst themselves, not realising that everybody else is playing an updated version of the game, and that the high scores they’ve worked so hard to achieve will be lost when they eventually realise they need to upgrade to mushroom land.

If the upgraded software has backwards compatibility with the old version, you have yourself a soft fork. In this situation, maybe people with the upgrade can see mushrooms, but to those who haven’t, the new mushrooms just look like more flowers. Crucially, people with the old software can still play with those who have upgraded, and even though the world looks a little different – ultimately everybody is still playing the same game.

So is there a third way?
Maybe there is. Let’s imagine that for technical reasons there is no way mushrooms can be introduced, even as flowers, without a software upgrade. Perhaps though, all players can participate in the same virtual world, except that those with the upgrade can also see mushrooms, while those without cannot. Crucially, people are not playing a completely different game, and when someone in flower land does upgrade they can take their score with them, and start collecting mushrooms without losing out.

How can this be achieved in Bitcoin?
Instead of just losing points, a worst case scenario for a Bitcoin hard fork would be people losing money. Whatever happens, there must always be a block produced with 1MB or less of transactions, otherwise Bitcoin will hard fork, as the old software will not recognise the new block.

So how do we prevent anybody losing money? We create a mushroom block. This mushroom block can be larger than 1MB. The existing 1MB block remains in place as a legacy block. Transactions that make it into the legacy block can be viewed by everybody on the network, those in the mushroom block can only be seen by those who have upgraded.

Once any Bitcoin has made its way into a mushroom block, it cannot be viewed or spent again by software that has not been upgraded. This isn’t quite a hard fork, though, as people on the old software will still be participating in the same network; they just won’t be able to see all the transactions that are actually taking place. If they try to spend Bitcoin that appears to be theirs on the old software, but has actually been spent in a mushroom block and belongs to someone else, all that will happen is that the miners will reject the transaction and they will not be able to spend it.

Ultimately, this is not ideal, and while it would no doubt create confusion, it would not create a complete hard fork and should not lead to anybody losing Bitcoin.

The only people who would actually need to upgrade for this to take place would be the miners. Technically speaking, they could have already done this and none of us would know. Since mining is far more centralised and dominated by a smaller group with huge financial stakes, it is likely that such an upgrade could be achieved rapidly, once the software is ready of course. For everyone who hasn’t upgraded, Bitcoin would just become increasingly less useful until they did so.
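The spend-rejection rule described above can be sketched as simple UTXO bookkeeping from the perspective of old versus upgraded software. This is a minimal model, not the actual proposal: the `Node` class, transaction dictionaries, and coin names are all invented for illustration.

```python
# A minimal sketch of the mushroom block idea: old software only applies the
# legacy section of each block, so its view of spendable coins goes stale,
# while upgraded miners (who see everything) reject stale respends.

class Node:
    def __init__(self, upgraded):
        self.upgraded = upgraded
        self.utxos = set()  # outputs this node believes are spendable

    def process_block(self, legacy_txs, mushroom_txs):
        # Old software cannot parse the mushroom section, so it skips it.
        txs = legacy_txs + (mushroom_txs if self.upgraded else [])
        for tx in txs:
            self.utxos -= set(tx["spends"])
            self.utxos |= set(tx["creates"])

old_node = Node(upgraded=False)
miner = Node(upgraded=True)

# A coin is created in the legacy section, so everyone sees it...
coinbase = {"spends": [], "creates": ["coin-1"]}
for node in (old_node, miner):
    node.process_block([coinbase], [])

# ...then spent inside a mushroom section that only upgraded nodes parse.
spend = {"spends": ["coin-1"], "creates": ["coin-2"]}
for node in (old_node, miner):
    node.process_block([], [spend])

assert "coin-1" in old_node.utxos   # old software still thinks it owns the coin
assert "coin-1" not in miner.utxos  # upgraded miners know it is gone

def miner_accepts(tx):
    """Upgraded miners reject any tx spending an output already consumed."""
    return all(o in miner.utxos for o in tx["spends"])

# The old node's attempted respend of coin-1 is simply rejected.
assert not miner_accepts({"spends": ["coin-1"], "creates": ["coin-3"]})
```

This captures the article's safety claim: the worst outcome for an un-upgraded user is a confusing, stale view and rejected transactions, not a competing chain on which their coins can be double spent.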
Post
Topic
Board Bitcoin Discussion
Re: Can Bitcoin make Banks disappear?
by
pondjohn
on 12/01/2016, 21:35:08 UTC
I actually explored this exact topic in a recent blog post you should find interesting.

Bitcoin alone will not be the end of the banks, but the end of the banks may well be coming.

Check it out: http://pondpolitics.com/2015/12/should-banks-fear-bitcoin/