Showing 20 of 24 results by Jeweller
Post
Topic
Board Service Announcements (Altcoins)
Re: Just-Dice.com : FREE BTC : Play or Invest : 1% House Edge : Banter++
by
Jeweller
on 25/06/2013, 16:00:03 UTC
Dooglus, great idea, great site.  I guess this is the inevitable result of watching S.Dice closely for a year, scratching your head, and finally going, well gee, if this made 70K BTC, I could do that...

I invested a couple BTC, and the 'house is up' number promptly went down!  I thought there was a 1% house edge!!
(I know, just kidding.  Investing is also gambling in a way, huh.  Just a bit surprising how quickly it dropped 4 BTC.  Someone got lucky... and it wasn't me.  Yet.)
(Then I started thinking, well it's up about 3% and it should only be up 1%, so I shouldn't invest until... no wait, that's just a fallacy in reverse... etc)

I wonder about the 0BTC bets.  I guess it's a cool advertising feature so that people can try it out before putting real BTC in, but what if you start getting thousands of 0-bets a second?  As an "investor" I'd like to know a little more about scaling / anti-DDOS stuff.  Also will the 1% of 1% be enough to fund the server?  And your time?  I hope so.

I'll probably just invest it, and don't particularly need the extra 0.01, but if you're still handing them out, can't really say no!
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin Holders Must Prepare For Mass Adoption
by
Jeweller
on 07/05/2013, 06:00:54 UTC
Looking at the "penetration" of bitcoin by country, I'm surprised at how low Japan is on the list. I would think that the technophilic home country of MtGox would find the Bitcoin story interesting, especially given that it was created by "Satoshi Nakamoto."

And the new quantitative easing (money-printing) policy of Abe's government should give Japanese a strong incentive to look into hedges against inflation, such as Bitcoin.

Not to mention that MtGox is located in Japan. Maybe it's because many live in small apartments where a noisy mining rig would be quite disturbing.

It is quite perplexing.  I really don't think it's because of small apartments.  It's mainly because nobody here has ever heard of it.  The JPY market on MtGox has extremely low volume, despite the fact that it's very fast and easy to get yen onto MtGox.

The other weird thing is that foreign exchange trading works as a de facto legal gambling industry here.  I don't have any good figures for the popularity, but in subways / trains / TV there are countless ads for FX trading platforms.  Mostly JPY / USD / EUR I guess.  Not sure if you actually get to take delivery or if it's more of a derivative like the CFD stuff which seems more popular in the EU.

But yeah.  No bitcoin awareness in Japan, no media, nothing.  Who knows why.  Might be that people are pretty old & conservative; I wouldn't expect my grandparents, nor Warren Buffett, nor most Japanese elderly people to understand bitcoin.
Post
Topic
Board Press
Topic OP
2013-04-07 New York Times - Bubble or No, Virtual Bitcoins Show Real Worth
by
Jeweller
on 08/04/2013, 06:17:23 UTC
http://www.nytimes.com/2013/04/08/business/media/bubble-or-no-virtual-bitcoins-show-real-worth.html

NYT is pretty big.  The article is not too bad, though still saying it's mostly for gambling and drugs.

The article links to some talk last week in Okinawa here, which has some papers that I, and I think many here, would be interested in reading if they're available.
Post
Topic
Board Bitcoin Discussion
Re: offline QR Code software with selectable error correction level?
by
Jeweller
on 02/04/2013, 06:25:54 UTC
Is there any offline QR code creation or generation software with selectable error correction level? I have kept looking, but almost all the ones I see are online.


Can't help with Windows but Linux has QtQr.  Works great.
I wouldn't feel safe generating and printing private keys in Windows anyway though.  Just get a USB flash drive and put some bootable Linux on it, and boot from there.

And maybe unplug your SATA cables if you're real paranoid.
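
For reference, here's another fully offline route -- a rough sketch assuming the third-party Python "qrcode" package (pip install qrcode[pil]); the address string is just a placeholder:
Code:
import qrcode
from qrcode.constants import ERROR_CORRECT_H  # L/M/Q/H recover roughly 7/15/25/30% damage

qr = qrcode.QRCode(error_correction=ERROR_CORRECT_H)  # selectable error correction level
qr.add_data("bitcoin:1PlaceholderAddressxxxxxxxxxxxxxx")  # placeholder payload
qr.make(fit=True)
qr.make_image().save("paper_key.png")  # written locally; nothing leaves the machine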
Post
Topic
Board Development & Technical Discussion
Re: Blocking the creation of outputs that don't make economic sense to spend
by
Jeweller
on 10/03/2013, 23:18:39 UTC
In my opinion, if full nodes are too expensive to run for enthusiasts and require funding by the miners, bitcoin will die.

Hence the 100,000 nodes, which would be government agencies, universities, big companies, big non profit organizations and enthusiasts (wealthy individuals).

There's no way bitcoin can survive with 1000 full nodes.

I disagree with this.  In fact, I think this type of scenario (full, non-mining nodes being too expensive) is almost inevitable.  I also don't see why 1,000 or under = death, but I have a feeling there will be more nodes than that.

Basically, why run a full node if you're not mining?  Seriously.  Right now I run a full node, because, hey why not.  I already have a computer, internet, all that stuff, it's no big deal for me.  It basically doesn't cost me anything.  I just fire it up in the background when I run my computer and it goes and catches up with the blockchain.

But I get no direct benefit from running the software really.  I know the general argument is this: I gain by not having to trust the miners.  I can verify every transaction in a published block and see if it's kosher.

To me, this is not really relevant.  If I'm on my computer, and I get block A that is pushed to me, but my full-node sees as invalid, OK, I can drop it.
But what if the next block, block B, is built on block A?  And then after that C?  What am I supposed to do, throw up my hands and say, no, you guys are all wrong!!!

Basically, if you mistrust ALL the miners, being able to look at blocks and reject them as invalid is pretty much useless.  If the mining network has actually been completely compromised, well that's pretty much game over anyway.

Another argument would be, well maybe someone's got control of my internet connection, and is feeding me made-up blocks.  But in that case you should see the block production rate collapse, because an attacker doing so would have vastly less hashing power than the total network.  So maybe he can do that with one block, but the 2nd or 3rd in that false-chain will take hours.
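
A rough back-of-the-envelope check of that (the 10-minute block interval is the only real input; the attacker fractions are made up):
Code:
BLOCK_INTERVAL = 600  # seconds; the network targets one block every ~10 minutes

def expected_hours_per_block(attacker_fraction):
    # A lone attacker finds blocks in inverse proportion to his share of total hash power.
    return BLOCK_INTERVAL / attacker_fraction / 3600

for frac in (0.10, 0.01, 0.001):
    print(f"{frac:.1%} of network hash power -> ~{expected_hours_per_block(frac):.1f} hours per block")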

Basically, why not let only miners run the full nodes?  Maybe you can worry about "centralization", true, but recall that initially, every full node was a mining node, or could be.  Mining has now become more centralized because of hardware requirements.  I expect that full-nodes will do the same, and substantially all full nodes will be engaged in mining in the future, because the costs of running a full node will continue to increase, with no concurrent increase in benefit.  Once holding on to the whole blockchain becomes economically significant, you'll think, hey, I'm spending all this money to run a full node, I might as well mine while I'm at it, since mining has all of the same requirements.

Are there plausible scenarios where the _average_ home user runs full nodes years from now?
Also... why is this a problem?  Thousands of miners running the bitcoin network, and being compensated to do so.  Normal users connect to those miners and bid for transaction space.  Sounds OK.
Post
Topic
Board Speculation
Re: CoinLab News = Price collapse
by
Jeweller
on 01/03/2013, 03:06:07 UTC
The real goal would be to help the public know they really can and should demand proof that their deposits are safe and sound and not fractionally pledged, the same way "provably fair gaming" is teaching us by example that we should demand, well, provably fair games.

In turn, this would cut down the number of "paper bitcoins" in existence (or rather, I should say, "bitcoin-denominated promises", because the "paper bitcoins" I have in mind - like bitaddress.org and bip38 - are actually close to the best kind of bitcoins you can have!)

Awesome, awesome idea.  I'm replying without having really thought through much (and indeed somewhat off-topic from the OP) but that sounds like it would work, right?  Because of the non-fractional reserve, you'd have to pay some kind of fee to store your BTC at the First Bank of Casascius, though.  So basically you're paying a party to securely store your BTC for you.  Which was the whole original purpose of banks, except now the fact that they're actually holding on to it is instantly verifiable.  Cool.

You haven't, like, patented that idea or anything yet, have you?  heh.
Post
Topic
Board Bitcoin Discussion
Re: Max Block Size Limit: the community view [Vote - results in 14 days]
by
Jeweller
on 21/02/2013, 13:01:42 UTC
My answer is none of the above.
Might I suggest another choice:

"Whatever everyone else is doing."

This position could be criticized as sheep-like, certainly, unthinking, just following.  But that is exactly what I think is best.  Honestly I don't really think a 1MB block limit is The End of Bitcoin.  And a 100MB block limit wouldn't be either.  Some kind of well designed variable limit based on fees over the last 2016 blocks, or difficulty, or something smart, sure, that'd be OK too.

You know what WOULD be The End of Bitcoin though?  If half the people stick to 1MB blocks, and the other half go to 10MB.  Then that's it, game over man.  MTGox price per bitcoin would plummet, right?  Because... MTGox price per WHICH bitcoin?  Etc.

So I'll go with what everyone else is doing.  And everyone else should too.  (There may be some logical feedback loop there...)  If there is a fork, it needs to come into effect only after substantially all of the last few weeks' worth of transactions have come from big-block-compatible clients, miners, everything.  Only then should any change be made.
Post
Topic
Board Development & Technical Discussion
Re: The MAX_BLOCK_SIZE fork
by
Jeweller
on 06/02/2013, 08:41:49 UTC
misterbigg - interesting idea, and I agree with your stance, but here are some problems.  It seems intuitively clear that, “Hm, if transaction fees are 3% of total bitcoins transmitted, that’s too high; the potential block space needs to expand.”

Problem is, how do you measure the number of bitcoins transmitted?

Address A has 100BTC.  It sends 98BTC to address B and 1BTC to address C.  How many bitcoins were spent?

Probably 1, right?  That is the assumption the blockchain.info site makes.  But maybe it’s actually sending 98.  Or none.  Or 99, to two separate parties.  So we can’t know the actual transfer amount.  We can assume the maximum.  But then that means anyone with a large balance which is fairly concentrated address-wise is going to skew that “fee %” statistic way down.  In the above transaction, you know that somewhere between 0 and 99BTC were transferred.  The fee was 1BTC.  Was the fee 100% or 1%?
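
To make the ambiguity concrete, a quick sketch of what an outside observer can and cannot compute from that example transaction:
Code:
inputs = [100]      # BTC redeemed from address A
outputs = [98, 1]   # BTC sent to address B and address C

fee = sum(inputs) - sum(outputs)   # 1 BTC -- the only unambiguous number
max_transfer = sum(outputs)        # 99 BTC, if both outputs go to other parties
min_transfer = 0                   # 0 BTC, if both outputs are really change

print(fee, min_transfer, max_transfer)
# The implied "fee %" is anywhere from fee/max_transfer (~1%) up to 100% or more.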

This also opens it up to manipulation.  Someone could mine otherwise empty blocks with enormous looking fees... which, since they mined the block, they get back, costing them nothing.  They could then work to expand the block size for whatever nefarious reason.

So while I think the “fees as % of transfer” is a nice number to work with in theory, in practice it’s not really available.  If we want to maintain scarcity of transactions in the blockchain while still having a way to expand it, I think the (total fee) / (block reward) is a good metric because it scales with time and maintains miner incentive.  While in its simplistic form it is also somewhat open to manipulation, you could just have an average of 10 blocks or so, and if an attacker is publishing 10 blocks in a row you’ve got way bigger problems. (Also I don’t think a temporary block size increase attack is really that damaging... within reason, we can put up with occasional spam.  Heck, we’ve all got a gig of S.Dice gambling on our drives right now.)
Post
Topic
Board Development & Technical Discussion
Re: The MAX_BLOCK_SIZE fork
by
Jeweller
on 04/02/2013, 06:39:40 UTC
Why don't we just let miners decide the optimal block size?

If a miner is generating a 1-GB block and it is just too big for other miners, other miners may simply drop it. That will just stop anyone from generating 1-GB blocks because they will become orphans anyway. An equilibrium will be reached and the block space is still scarce.

Unfortunately it's not that simple for a couple reasons.

First, right now clients will reject oversized blocks from miners.  Other miners aren't the only ones who need to store the blocks; all full nodes do, even if they just want to transact without mining.  So what if all the miners are fine with the 1-GB block and none of the client nodes are?  Total mess.  Miners are minting coins only other miners recognize, and as far as clients are concerned the network hash rate has just plummeted.

Second, right now we have a very clear method for determining the "true" blockchain. It's the valid chain with the most work.  "Most work" is easily verified, everyone will agree.  "Valid" is also easily tested with unambiguous rules, and everyone will agree.  Miners can't "simply drop" blocks they don't like.  Maybe if that block is at depth -1 from the current block, sure.  But what if someone publishes a 1GB block, then someone else publishes a 1MB block on top of that?  Do you ignore both?  How far back do you go to start your own chain and try to orphan that whole over-size branch?

I think you can see the mess this would create.  The bitcoin network needs to operate with nearly unanimous consensus.
Post
Topic
Board Development & Technical Discussion
Re: The MAX_BLOCK_SIZE fork
by
Jeweller
on 03/02/2013, 12:17:55 UTC
Wait, no, I spoke too soon.  The fee/reward ratio is a bit too simplistic. 
An attacker could publish one of those botnet type of blocks with 0 transactions.  But instead, fill the block with spam transactions that were never actually sent through the network and where all inputs and outputs are controlled by the attacker.  Since the attacker also mines the block, he then gets back the large fee.  This would allow an attacker to publish oversized spam blocks where the size is only limited by the number of bitcoins the attacker controls, and it doesn't cost the attacker anything.  In fact he gets 25BTC with each successful attack.  So an attacker controlling 1000BTC could force a 40MB spam block into the blockchain whenever he mines a block.
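
The arithmetic of that attack, as a rough sketch (the 1000BTC figure is from above; the sizing rule is the fee/reward idea from my earlier post):
Code:
BLOCK_REWARD = 25     # BTC per block at the moment
attacker_btc = 1000   # coins the attacker cycles through his own spam transactions as fees

fees_paid = attacker_btc
fees_recovered = attacker_btc                           # he mines the block, so the fees come straight back
net_cost = fees_paid - fees_recovered - BLOCK_REWARD    # -25 BTC: the attack actually pays him

spam_block_mb = 1 + attacker_btc / BLOCK_REWARD         # ~41 MB allowed under fee/reward sizing
print(net_cost, spam_block_mb)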

Not the end of the world, but ugly. 
There are probably other holes in the idea too.
Anyway, I'm just suggesting that something akin to a (total fee/block reward) calculation may be useful.  Not sure how you'd filter out spammers with lots of bitcoins.  And filtering out spammers was the whole point (at least according to Satoshi's comments) of the initial 1MB limit.

I'll keep pondering this, though I guess it's more about WHAT the fork might be, rather than HOW (or IF) to do it.
Post
Topic
Board Development & Technical Discussion
Re: The MAX_BLOCK_SIZE fork
by
Jeweller
on 03/02/2013, 11:52:57 UTC
Some ideas to throw into the pile:

Idea 1: Quasi-unanimous forking.

If the block size fork is attempted, it is critical to minimize disruption to the network.  Setting it up well in advance based on block number is OK, but that lacks any kind of feedback mechanism.  I think something like:
Code:
if( block_number > 300000) AND ( previous 100 blocks are all version > 2)
then { go on and up the MAX_BLOCK_SIZE }
Maybe 100 isn't enough, but if all of the blocks in a fairly long sequence have been published by miners who have upgraded, that's a good indication that a very large super-majority of the network has switched over.  I remember reading something like this in the qt-client documentation (version 1 -> 2?) but can't seem to find it.

Alternatively, instead of just relying on block header versions, also look at the transaction data format version (first 4 bytes of a tx message header). Looking at the protocol it seems that every tx published in the block will also have that version field, so we could even say "if no more than 1% of all transactions in the last 1000 blocks are version 2, it's OK to switch to version 3".

This has the disadvantage of possibly taking forever if there are even a few holdouts (da2ce7? Grin), but my thinking is that agreement and avoiding a split blockchain is of primary importance and a block size change should only happen if it's almost unanimous.  Granted, "almost" is ambiguous: 95%?  99%?  Something like that though.  So that anyone who hasn't upgraded for a long time, and somehow ignored all the advisories would just see blocks stop coming in.
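
A rough sketch of what such an activation check could look like (hypothetical helper, not real client code; the 300,000 height and 100-block window are just the numbers from the pseudocode above):
Code:
def should_raise_block_size(block_height, recent_block_versions,
                            activation_height=300_000, min_version=3, window=100):
    """Only allow the new MAX_BLOCK_SIZE once every one of the last `window`
    blocks was produced by an upgraded (higher-version) miner."""
    if block_height <= activation_height:
        return False
    recent = recent_block_versions[-window:]
    return len(recent) == window and all(v >= min_version for v in recent)

# 99 upgraded blocks and 1 holdout -> stay at the old limit
print(should_raise_block_size(300_123, [3] * 99 + [2]))   # False
print(should_raise_block_size(300_123, [3] * 100))        # True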

Idea 2:  Measuring the "Unconfirmable Transaction Ratio"
I agree with gmaxwell that an unlimited max block size, long term, could mean disaster.  While we have the 25BTC reward coming in now, I think competition for block space will more securely incentivize mining once the block reward incentive has diminished.  So basically, blocks should be full.  In a bitcoin network 10 years down the road, the max_block_size should be a limitation that we're hitting basically every block so that fees actually mean something.  Let's say there are 5MB of potential transactions that want to get published, and only 1MB can be due to the size limit.  You could then say there's a 20% block inclusion rate, in that 20% of the outstanding unconfirmed transactions made it into the current block.

I realize this is a big oversimplification and you would need to more clearly define what constitutes that 5MB "potential" pool.  Basically you want a nice number of how much WOULD be confirmed, except can't be due to space constraints.  Every miner would report a different ratio given their inclusion criteria.  But this ratio seems like an important aspect of a healthy late-stage network.  (By late-stage I mean most of the coins have been mined)  Some feedback toward maintaining this ratio would seem to alleviate worries about mining incentives. 
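
The ratio itself is trivial to compute; the hard part, as noted, is defining the "potential" pool (sketch only):
Code:
def inclusion_ratio(confirmed_bytes, potential_bytes):
    # Share of the would-be-confirmed transaction volume that actually fit in the block.
    return confirmed_bytes / potential_bytes

# The example above: 5MB of viable transactions competing for 1MB of block space.
print(inclusion_ratio(1_000_000, 5_000_000))   # 0.2 -> a 20% block inclusion rate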

Which leads to:

Idea 3:  Fee / reward ratio block sizing.

This may have been previously proposed as it is fairly simple.  (Sorry if it has; I haven't seen it but there may be threads I haven't read.)

What if you said:
Code:
MAX_BLOCK_SIZE = 1MB + ((total_block_fees / block_reward)*1MB)
so that the block size would scale up as a multiple of the reward.  So right now, if you wanted a 2MB block, there would need to be 25BTC total fees in that block.  If you wanted a 10MB block, that's 250BTC in fees.

In 4 years, when the reward is 12.5BTC, 250BTC in fees will allow for a 20MB block.
It's nice and simple and seems to address many of the concerns raised here.  It does not remove the freedom for miners to decide on fees -- blocks under 1MB have the same fee rules.  Other nodes will recognize a multiple-megabyte block as valid if the block had tx fees in excess of the reward (indicative of a high unconfirmable transaction ratio.)
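
A quick numeric check of the proposed rule (a sketch; the 250BTC examples above round off the extra 1MB term):
Code:
def max_block_size_mb(total_block_fees, block_reward):
    # MAX_BLOCK_SIZE = 1MB + ((total_block_fees / block_reward) * 1MB)
    return 1 + total_block_fees / block_reward

print(max_block_size_mb(25, 25))      # 2.0 MB
print(max_block_size_mb(250, 25))     # 11.0 MB -- roughly the "10MB" case above
print(max_block_size_mb(250, 12.5))   # 21.0 MB after the next halving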

One problem with this is it doesn't work long term, because the reward goes to zero.  So maybe put a "REAL" max size at 1GB or something, as ugly as that is.  Short / medium term though it seems like it would work.  You may get an exponentially growing max_block_size, but it's really slow (doubles every few years).  Another problem I can think of is an attacker including huge transaction fees just to bloat the block chain, but that would be a very expensive attack.  Even if the attacker controlled his own miners, there's a high risk he wouldn't mine his own high-fee transaction.

Please let me know what you think of these ideas, not because I think we need to implement them now, but because I think thorough discussion of the issue can be quite useful for the time when / if the block size changes.
Post
Topic
Board Development & Technical Discussion
Re: The MAX_BLOCK_SIZE fork
by
Jeweller
on 31/01/2013, 15:20:15 UTC
Mike Hearn - Sorry if this feels like a redundant question, or that it's decreasing the signal to noise ratio here in any way.  I suppose at its base it's not really an answerable question: what's the future of bitcoin?  We'll have to see.

What's interesting is that there seem to be two fairly strongly divergent viewpoints on this matter: some people assume the transaction network will continue to grow to rival paypal or even credit cards, and see the block size limit as an unimportant detail that will be quickly changed when needed.  Others see the limit as a fundamental requirement, or even dogma, of the bitcoin project, and view the long term network as mainly an international high-value payment system, or the backing of derivative currencies.  Both views seem reasonable, yet mutually exclusive.

I don't see this kind of disagreement with other often-brought up and redundant issues, such as "satoshi's aren't small enough", "people losing coins means eventually there won't be anything left" and so on.  Those aren't real problems.  I'm not saying the 1MB limit is a "problem" though, I just want to know what people are going to do, and what's going to happen.  Regardless of anyone's opinion on the issue, given the large number of people using bitcoin, the ease with which the change can be made, and the impending demand for more transactions, someone will compile a client with a larger block limit.  What if people want to start using it?

I can see this issue limiting bitcoin acceptance as a payment method for websites: why go to all the trouble of implementing a high-security bitcoin processing system for your e-commerce site if in a couple years bitcoin won't be usable for small transactions?  Maybe it will in fact scale up, but without any clear path for how that would happen, many will choose to wait on bitcoin and see what evolves rather than adopt it for their organization.

Sorry if I'm being dense -- from the wiki this is indeed classified as "Probably not a problem", and if some developers come on here and told me, "Quiet, you, it's being worked on," I would concede the point to them.  To me though the uncertainty itself of whether the 1MB limit will remain gives me pause.  The threads from 3 years ago debating the same topic perhaps make this conversation redundant, but don't settle the issue for me: this was a tricky problem 3 years ago, and is still.  The only thing that's changed with regard to the block size is that we're getting ever closer to hitting the limit.

Perhaps this is a conversation we'll just need to have in a year or so when the blocks are saturated.
Post
Topic
Board Development & Technical Discussion
Re: The MAX_BLOCK_SIZE fork
by
Jeweller
on 31/01/2013, 12:13:41 UTC
caveden - Thanks for the link to the thread from 2010.  It's interesting that many people, including Satoshi, were discussing this long before the limit was approached.  And I definitely agree that having it hard-coded in will make it much harder to change in 2014 than in 2010.

da2ce7 - I understand your opposition to any protocol change.  Makes sense; we signed up for 1MB blocks, so that's what we stay with.  What I'd like to know is, what would your response be if there was a widespread protocol change?  If version 0.9 of the qt-client had some type of increased, or floating max block size (presumably with something like solex proposes), would you:

- refuse the change and go with a small-block client
- grudgingly accept it and upgrade to a big-block client
- give up on bitcoin altogether?

I worry about this scenario from a store-of-value point of view.  Bitcoins are worth something because of the decentralized consensus of the block chain.  To me, anything that threatens that consensus threatens the value of my bitcoins.  So in my case, whether I'm on the big-block side or the small-block side, I'm actually just going to side with whichever side is bigger, because I feel the maintenance of consensus is more valuable than any benefits / problems based on the details of the protocol.  Saying you reject it on "moral" terms though makes me think you might not be willing to make that kind of pragmatic compromise.

That said, 1MB is really small.  I'm trying to envision a world-finance-dominating network with 1MB blocks every 10 minutes and it's tough.  While there are lots of great ideas, it does seem to defeat the purpose a little bit to have the vast majority of transactions taking place outside the blockchain. 
And if the 1MB limit does stay, it calls into question the need for client improvements in terms of efficiency and so on.  If the blocks never get appreciably bigger than they do now, well, any half-decent laptop made in the past few years can handle being a full node with no problem.

Perhaps a better question then I'd like to ask people here is: The year is 2015.  Every block is a megabyte.  Someone wrote a new big-block client fork, and people are switching.  What will you do?
Post
Topic
Board Development & Technical Discussion
Re: The MAX_BLOCK_SIZE fork
by
Jeweller
on 31/01/2013, 08:18:30 UTC
Wow - thanks for the quick, extremely well crafted responses.

da2ce7 - sorry if this is an old topic; I think my confusion stems from the wiki -- it strongly implies a consensus that the size limit will be lifted.

gmaxwell - thanks; I hadn't thought through how the blockchain would actually fork.  Yeah, you really would immediately get two completely separate chains.  Yikes.

In general I agree, the block size needs to be limited so that tx fees incentivize mining.  Overly high limits mean someone, somewhere, will mine for free, allowing people to low-ball transactions, and ruining mining incentives in general.

What I meant by the IPv4 thing is that... 1MB?  That's it?  Like 500,000 tx a day.  If only they had said 100MB, that wouldn't have really made any difference in the long run, and then millions of people could get their transaction in there every day.  Which is what I've often thought about with IP addresses: if only they'd done 6-bytes like a hardware MAC address, then maybe we wouldn't have to worry about it...

So, the wiki should be changed, right?  I'd say just reading this thread, anyone holding bitcoins, from a conservative perspective would want to avoid the chaos of a split blockchain at all costs, and not consider changing the protocol numbers.  I had been under the impression, and I think many others are, that the network (not just the currency) would in fact be scaling up enormously in the future.

As for centralization, the decentralization of the bitcoin transaction network will then suffer in a way.  Right now, anyone can send their bitcoins wherever they wish.  Years from now, when people are bidding against each other for space in the constantly over-crowded blockchain, no normal people will be able to make on-chain, published transactions...
Post
Topic
Board Development & Technical Discussion
Re: The MAX_BLOCK_SIZE fork
by
Jeweller
on 31/01/2013, 07:47:12 UTC
The first thing you need to understand is that it's not just a matter of the majority of miners for a hard fork.... it's got to be pretty much everybody.

Quite true.  In fact even more so because "old" protocol nodes will only accept small blocks, while the "new" protocol nodes will accept either small (<1MB) or large (>1MB) blocks.  Thus all blocks produced by old miners will be accepted by the new ones as valid, even when there's an extra 500KB of transactions waiting in line to be published.

You'd need like a >90%, simultaneous switch to avoid total chaos.  In that case substantially all the blocks published would be >1MB, and the old protocol miners wouldn't be able to keep up.  If normal nodes switched at the same time, they would start pushing transactions that old-protocol clients / miners would lose track of.  It seems very likely that when / if the change takes place, blocks will have been at the 1MB limit for some time and the end of the limit would immediately result in 1.5MB blocks, so it would have to be coordinated well in advance.

Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Topic OP
The MAX_BLOCK_SIZE fork
by
Jeweller
on 31/01/2013, 07:23:52 UTC
⭐ Merited by ETFbitcoin (1)
I’d like to discuss the scalability of the bitcoin network, specifically the current maximum block size of 1 megabyte.  The bitcoin wiki states:
Quote
Today the Bitcoin network is restricted to a sustained rate of 7 tps by some artificial limits. … Once those limits are lifted, the maximum transaction rate will go up significantly.
… and then goes on to theorize about transaction rates many orders of magnitude higher.  Certainly from a software engineering point of view, medium-term scalability is a trivial problem. An extra zero in the
Code:
static const unsigned int MAX_BLOCK_SIZE = 1000000;
line would be fine for a good while.  But I think dismissing the block size issue as the wiki and many others have done is a serious mistake.
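
For reference, the wiki's 7 tps figure follows directly from that constant -- a quick sketch, where the ~250-byte average transaction size is an assumption:
Code:
MAX_BLOCK_SIZE = 1_000_000   # bytes, the current hard-coded limit
AVG_TX_SIZE = 250            # bytes per transaction, rough assumption
BLOCK_INTERVAL = 600         # seconds between blocks on average

tps = MAX_BLOCK_SIZE / AVG_TX_SIZE / BLOCK_INTERVAL
print(f"~{tps:.1f} transactions per second")   # ~6.7 tps, i.e. the oft-quoted "7 tps"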

Some background on the arguments can be found in this thread and others.

Changing this limit needs to be discussed now, before we start hitting it.  Already a quick glance at the blockchain shows plenty of blocks exceeding 300KB.  Granted most of that’s probably S.Dice, but nobody can really dispute that bitcoin is rapidly growing, and will hit the 1MB ceiling fairly soon.

So... what happens then?  What is the method for implementing a hard fork?  No precedent, right?  Do we have a meeting?  With who?  Vote?  Ultimately it’s the miners that get to decide, right?  What if the miners like the 1MB limit, because they think the imposed scarcity of blockchain space will lead to higher transaction fees, and more bitcoin for them?  How do we decide on these things when nobody is really in charge?  Is a fork really going to happen at all?

Personally I would disagree with any pro-1MB miners, and think that it’s in everyone’s interest, miners included, to expand the limit.  I think any potential reductions in fees would be exceeded by the increased value of the block reward as the utility of the network expands.  But this is a source of significant uncertainty for me -- I just don’t know how it’s going to play out.  I wouldn’t be surprised if we are in fact stuck with the 1MB limit simply because we have no real way to build a consensus and switch.  Certainly not the end of bitcoin, but personally it would be disappointing.  A good analogue would be the 4-byte addresses of IPv4... all over again.  You can get around it (NAT), and you can fix it (IPv6) but the former is annoying and the latter is taking forever.

So what do you think?  Will we address this issue?  Before or after every block ≈ 1,000,000 bytes?
Post
Topic
Board Economics
Re: Article: Bank of Japan inflates the yen to infinity and beyond starting 2014
by
Jeweller
on 27/01/2013, 05:18:43 UTC
My guess is MtGox keeps a fairly low profile in Japan?

Yes.  I can add my data point here; there is very, very, little knowledge of Bitcoin in Japan, even among tech-minded types.

In my case, I had held a lot of JPY which had done fairly well for me (vs for example USD) but that seems to have ended.  Should have bought more BTC before Abe-kun came in.

The debt expansion / QE / printing / whatever you call it is on the news here.  There was also an NHK piece a few days ago where they talked to grandmas in line to buy gold bars / coins at some dealer in Tokyo.  Apparently there's that, and also people going into stocks.

So that's what's on TV.  Who knows what's actually happening.

I can get from JPY physical cash to an MTGox account at well under 0.1% transfer fee here.  Takes like an hour.  I bet if I opened a Sumitomo account I could get that even lower, to maybe a flat 200 JPY fee or so.  Non-anonymous bank transfer, but still, very, very easy.

So in that sense Japan is quite well set up for Bitcoin adoption.

I sort of worry that the whole reason MTGox is in Japan though is that Japanese regulators have no idea what MTGox is doing, in fact no regulator has ever even heard of it... in fact likely the bank MTGox uses probably has no idea either... and for all involved, trying to find out or ask is not in their job description.  We'll see how long this situation lasts.
Post
Topic
Board Development & Technical Discussion
Re: SatoshiDice, lack of remedies, and poor ISP options are pushing me toward "Lite"
by
Jeweller
on 03/01/2013, 12:14:46 UTC
A limit on the data size of a block isn't the only or even the most efficient way to keep demand for transactions high. The marginal cost of handling the tx size is expected to be much lower than the amortized cost of hashing to secure it, so artificial scarcity in the former to sponsor the latter creates perverse incentives.

I agree here: Limiting a block to 1MB feels strange and arbitrary.  More so given that 1MB is pretty much nothing for today's computers, and many times this transaction rate could be supported with no real problems other than removing the block size limit.

This leaves the problem of sponsoring hashing. If the marginal cost of a transaction is C, including a transaction with fee C+epsilon is pure profit. There is no reason for a small miner not to include it. This means epsilon will tend to 0 and total revenue from txs will be equal to the resource cost needed to handle them, leaving no revenue to sponsor the very expensive hashing.

So if I understand what you're saying here, it's that any self-interested solo miner (not a cartel) will include transactions as long as it's profitable for him / her to do so.  Or rather, not unprofitable / loss making.  We see this now; miners include transactions with no fees, with no direct benefit to themselves.  Assuming no block size limit, the negative feedback loop would simply be processing cost to the miner vs. transaction fee, which would tend to make for very cheap transactions.

This is basically how it works now.  C+epsilon for each transaction included, with both C and epsilon equal to 0 in most transactions right now.  Right now what's determining effort into block publishing is the block reward, which overwhelms transaction fees, essentially subsidizing them.
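
A toy sketch of that fee floor (all numbers made up; the point is just that a lone profit-maximizing miner includes anything whose fee covers its own handling cost C):
Code:
MARGINAL_COST_PER_KB = 0.00001   # BTC per kB of block space; purely illustrative "C"

def worth_including(size_kb, fee):
    return fee >= size_kb * MARGINAL_COST_PER_KB

mempool = [(0.5, 0.0), (0.3, 0.0000032), (1.0, 0.000011)]   # (size_kb, fee) pairs
block = [tx for tx in mempool if worth_including(*tx)]

fee_revenue = sum(fee for _, fee in block)
handling_cost = sum(size * MARGINAL_COST_PER_KB for size, _ in block)
print(block, fee_revenue, handling_cost)   # revenue barely exceeds cost; nothing left over to pay for hashing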

The block reward will become negligible, but that's many years ahead of us.  Long before that, within a year or two, we're likely to start hitting block size limits somewhat regularly, and after that, perhaps constantly.

I'm not certain an unconstrained block size can work.  But I think it's highly likely it can, and I've not read anything to persuade me otherwise.
If the market wants something (here, more hashing power) it will pay for it, like anything else.

So why not give it a go?  If it's a disaster, there will be no problem getting the 50% consensus to put one back, right?

Yeah, why not! Grin
Seriously though I'm inclined to agree here as well -- I bet removing the size limit wouldn't kill bitcoin.  But here's the thing-- while we're debating these issues on bitcointalk.org (and sorry if this is an old / tired debate -- it certainly isn't to me however) people are building new bitcoin businesses and investments and all sorts of stuff.  And the blockchain is growing, and miners are mining, and the average transactions / sec and block size are going up.  If we all came to a unanimous decision about a protocol change, it would still be a big pain to switch / fork.  But since we can't even agree, and developers are saying basically "we'll see what happens", to me that means the 1MB limit is going to be with us for quite a while, for good or bad.

So I'm still trying to figure out myself if "artificially" constrained block sizes is "good" or "bad" for bitcoin.  My feeling is that either with the 1MB limit, or without a hard limit, the system will work, but it will work for somewhat different purposes. 

But would you all agree that regardless of desirability, we will see blocks really hitting that limit hard, and stuff really affected by it, before any protocol change occurs? (If it ever does.)
Post
Topic
Board Development & Technical Discussion
Re: SatoshiDice, lack of remedies, and poor ISP options are pushing me toward "Lite"
by
Jeweller
on 02/01/2013, 14:11:14 UTC
This is one of the most interesting threads I've seen here -- made an account to reply.

A lot of people have different ideas about the importance of this block size limit, and I think it's useful to separate opinions about what "should" happen, and more objective analysis of what "will" happen.

And thinking about it, to me it looks like the 1MB block size is here to stay.  I'm not really sure what that means for bitcoin, if it's "good" or "bad", but here's why I think it's permanent.

First, it's a hard fork.  Every node right now, to my knowledge, will reject any block coming in at over 1MB.  Miners included.  ANY kind of hard fork is going to be really hard to implement due to the size of the network.  But if a hard fork does benefit everyone, or almost everyone, it could be planned in advance and switched to at some date.  As long as most (substantially more than 51%) of nodes switch to the new protocol in a coordinated way, a hard fork change could work. (BTW, has this ever happened?  I haven't heard of it but maybe it has?)

But in the case of removing the 1MB limit, I don't think the agents involved here will agree.  Specifically, the miners.  SatoshiDice would certainly be on board for 1GB blocks, as would BitPay, Gox, and I'd imagine most end users.  But I think miners have incentive to maintain the 1MB limit.  The limit creates an (artificial?) scarcity for what they are providing / selling: inclusion in the block chain.  If there's only room for ~4K transactions per block, thus only ~600K transactions a day, a spot in that blockchain is going to be quite valuable if millions of people want in.  Transaction fees could be quite high, making only large transfers feasible.

A transition to larger block sizes would thus be resisted by miners.  While you might say, "Well, at 10MB / block there's potential for 10X as many transactions, thus 10X the fees" I don't think it would work that way.  But in fact, I don't think this will ever be tested; all you need is miners to resist this change due to uncertainty.  Some miners might like to increase the size limit, but many won't.  Having large groups of miners disagreeing on fundamental protocol aspects sounds like a disaster for bitcoin.  Basically, to me, the network of miners IS bitcoin; they provide the power to run the network, and are compensated to do so.  If the miners aren't 100% for it, it's just not going to happen.

So I'd like to hear from people who agree, but mostly who disagree with me; how do you think this modification of the protocol could play out?  How would the miners get on board?  Do you think miners would in fact want bigger blocks, and why?  This seems like a very relevant discussion to have right now.  We could be hitting 1MB blocks in a matter of months.
Post
Topic
Board Beginners & Help
Re: what about privacy? if all transactions are public, you can easily trace account
by
Jeweller
on 02/01/2013, 13:24:19 UTC
One thing to understand is that it's trivially easy to create a new address.  Takes your computer a fraction of a second.  So you can create thousands of addresses, have bitcoins sent to any of them, sent from any of them, and have them sending to each other, internally.  From the outside, nobody knows who controls which address though.
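
To give a sense of how cheap key creation is, here's a sketch using the third-party "ecdsa" Python package (generating keypairs only; a real address additionally hashes the public key with SHA-256 and RIPEMD-160 and Base58Check-encodes it, which is omitted here):
Code:
import time
from ecdsa import SigningKey, SECP256k1   # pip install ecdsa

start = time.time()
keys = [SigningKey.generate(curve=SECP256k1) for _ in range(100)]
elapsed = time.time() - start

pubkey_hex = keys[0].get_verifying_key().to_string().hex()   # raw 64-byte x||y public key
print(f"generated 100 keypairs in {elapsed:.3f} seconds")
print("first public key:", pubkey_hex[:32], "...")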

But they can try to guess, and trace things back.  http://blockchain.info/ lets you look around all the transactions.

If you're really paranoid, there are "mixing services" that let you send bitcoins to them, and have bitcoins come to a set of addresses you specify at some later time, from different addresses than where you sent them.

So things could be traced back to individuals, if they use the same address for everything, and post that address publicly.  But in most cases, there's no way to know who owns what.