
Showing 20 of 271 results by tl121
Post
Board: Hardware
Topic: Re: ANTMINER S7 is available at bitmaintech.com with 4.86TH/s, 0.25J/GH
by tl121 on 23/03/2016, 17:12:07 UTC

55% manual drops fan speed down to less than killer sound, maybe 3300 - 3500 rpm.

35% manual gets you to 2600 - 2800, but you must lower the frequency.

Depends on the particular unit (possibly the batch).  Depends on room temperature.  Depends on air flow in the room.  I run all my fans at 35% and this is OK in my room, provided that the temperature in the room stays below 30 degrees C, even on units that are slightly overclocked.
Post
Board: Hardware
Topic: Re: ANTMINER S7 is available at bitmaintech.com with 4.86TH/s, 0.25J/GH
by tl121 on 09/03/2016, 17:08:07 UTC
Last night my miner lost its connection, so I went down to the basement and saw it working, but it had no connection to the router. Then I cut the power and turned it back on, and everything worked OK. Does anybody know why that happened?

This has happened to me too, on several different units.  On one occasion it was correlated with a temporary loss of internet connection.  This affected one unit, but the others worked OK.  The symptoms were that the OS had crashed: the miner web interface did not respond, nor would it respond to a ping at its IP address.  The only cure was to power cycle.
Post
Board: Pools
Topic: Re: [40+ PH] SlushPool (slushpool.com); World's First Mining Pool
by tl121 on 22/02/2016, 15:04:24 UTC
If you believe Slush's explanation, which I do, then it seems inappropriate to refer to this situation as a "block withholding attack".  A better term would be "block withholding bug".

As for people who believe that Slush "owes" them something: we all had access to the same information and had the opportunity to move our hash power months ago, once it was clear that something was wrong.  I stopped mining at the pool some months ago, once it was obvious that the bad "luck" was almost certainly not luck.
Post
Board: Bitcoin Discussion
Topic: Re: Blocks are full.
by tl121 on 23/01/2016, 20:40:57 UTC

OK, so this isn't principally about 1.6-2.5MB vs 2MB or HF vs SF, but about a change in governance?

To me it is.  I want decentralized development.
 

This doesn't make any sense... Bitcoin Classic's model of development is more centralized right now than Core's. There is no decentralized governance model established yet, and to assume there will be one in the future is hopeful, to put it kindly...

I just think it's a bit premature to get behind something that is currently set up under the benevolent-dictator model. Wouldn't it make sense to first draft a governance model, then promote it?

I DON'T want Blockstream (a private company)

Aren't Consider.it and toom.im private companies owned by two brothers as well? Don't you find any conflict of interest there?

Decentralized development doesn't make sense to you?

Not sure why.  Makes perfect sense to me.

Better than "best" would be completely separate implementations with separate code bases, preferably in separate programming languages.  Separate teams evolving a common code base continues the risks of monoculture.

Post
Board: Bitcoin Discussion
Topic: Re: Blocks are full.
by tl121 on 22/01/2016, 21:30:12 UTC
If we start raising the block size it will go like that: tomorrow 2 MB, in one week 4 MB; you understand the trick.


Simple and straight to the point. The end result would be simply that users on slow networks will be unable to catch up.

At least there will be something to catch up to and a reason to upgrade one's infrastructure if one needs to run a full node or wants to watch Netflix.

Post
Board: Bitcoin Discussion
Topic: Re: Analysis and list of top big blocks shills (XT #REKT ignorers)
by tl121 on 12/01/2016, 19:04:18 UTC
Mining nodes are all already in data centres. We are already far past this point, so I would not consider that a good reason not to increase the block size. Miners cannot "raise the fee"; they simply choose which transactions to include and not to include, and collectively this creates a free market for fees. With an arbitrarily small block size limit, it has more in common with a centrally planned economy.

This is back to Peter Todd's famous question: if it is already centralized, then why make it worse?

The relay network that miners are using right now is a perfect example of how we are relying on a private company to provide a service necessary to the bitcoin network. Following this route, in the future all the mining nodes will operate on a private company's network, so that a couple of phone calls could shut them down right away.

A small block size does not prevent you from inventing fee-free transaction services off-chain. In fact, a limit of 1 MB and a limit of 8 MB have the same effect, because bitcoin is never going to scale indefinitely. So if you have to limit the block size sooner or later, why not do it now, while the bitcoin core software is still relatively lightweight? It is the direction that matters, not the parameters.

You can be damn sure that if this private company started doing something the miners didn't like, it would be replaced, probably within one or two days.  As I understand it, the code is all open source, and it's just a matter of running similar code at new data centers and then reconfiguring some IP addresses in .conf files.
Post
Board: Bitcoin Discussion
Topic: Re: Analysis and list of top big blocks shills (XT #REKT ignorers)
by tl121 on 11/01/2016, 18:26:42 UTC
This is what I was referencing (such blocks might occur):


You need at least two known values to be able to extrapolate unknown values.

This slide uses just one value: 1 MB, with 30 seconds as the longest time any node takes to process a block today. Then it simply extrapolates the unknown values, 3 MB and 8 MB, with a function O(n^2).

This is bad science. You need at least two known values, such as 0.5 MB and 1 MB with their corresponding propagation times to all nodes, in order to estimate the function at any unknown value like 3 MB or 8 MB. Obviously, the more known values you have (beyond two), the more precise a function you can create and the more reliable far-out unknown values like 8 MB become.

But the slide is a big fail: with just one known value (1 MB), the presented function comes not from a fit to at least two known values but merely from the author's wild guess, hence the obvious and eye-catching absurd 3 MB and 8 MB propagation-time predictions.

The example represents a severe performance bug in the bitcoin core software and should be fixed by software changes that make an inefficient operation efficient or restrict very large transactions if this proves too difficult. There is no reason to limit blocksize because of this.
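The point about needing more than one data point can be sketched numerically. Only the (1 MB, 30 s) measurement comes from the slide being criticized; the two candidate models below are illustrative, showing that a single point cannot distinguish them:

```python
# One measured point from the slide: a 1 MB block takes 30 s to reach all nodes.
def linear(n_mb):
    # T(n) = 30 * n: a linear model passing through the single point
    return 30.0 * n_mb

def quadratic(n_mb):
    # T(n) = 30 * n^2: the O(n^2) guess passing through the same point
    return 30.0 * n_mb ** 2

# Both models agree exactly at the only known value...
assert linear(1) == quadratic(1) == 30.0

# ...but their extrapolations to 8 MB differ by a factor of 8:
print(linear(8))     # 240.0 seconds
print(quadratic(8))  # 1920.0 seconds
```

With a second measured point (say at 0.5 MB), one of the two models would be ruled out immediately, which is exactly the fitting step the slide skipped.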
Post
Board: Bitcoin Discussion
Topic: Re: Analysis and list of top big blocks shills (XT #REKT ignorers)
by tl121 on 11/01/2016, 18:21:18 UTC

As far as the numbers go, there are 8 bits in one byte.  Therefore, a speed of 8 megabits per second is one megabyte per second.
If the block size is 8 MB, that's 8 seconds to transfer.  Yet it takes 10 minutes on average to solve a block, so the transfer takes 8/600 of the block interval.


Each node connects to 8 nodes, and each of these 8 nodes connects to another 8 nodes, and so on. But some of these connections are duplicated, so it will take several hops before a block is relayed to the majority of the nodes, maybe 4-5 hops. When you have 8 seconds for a block transfer and a block verification time of 8 seconds, the nodes on the far end of the network would receive it in 80 seconds, which is a significant delay.

Let's first imagine the ideal bitcoin blockchain built by aliens: it takes 1 second to receive and verify each block, takes 1 MB of hard drive space, and can carry an unlimited number of transactions every 10 minutes.  Then you add the real-world limitations to it and see which parts you can compromise on.

An 80 second delay is significant only if you are mining, as it would represent a significant orphan risk.  This is already true with 0.5 MB blocks for slow internet service, such as the slow upload rate on typical DSL connections.  Most miners who don't have excellent internet connectivity do their mining via mining pools that are well connected and need minimal bandwidth for this purpose, independent of the block size.
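The arithmetic in the exchange above can be sketched as follows. The link speed, verification time, and hop count are the posters' stated assumptions, not measurements:

```python
BITS_PER_BYTE = 8

def hop_delay_s(block_mb, link_mbps, verify_s):
    """Seconds for one relay hop: transfer the block, then verify it."""
    transfer_s = block_mb * BITS_PER_BYTE / link_mbps
    return transfer_s + verify_s

# Assumptions from the quote: 8 MB block, 8 Mbit/s links, 8 s verification, 5 hops.
per_hop = hop_delay_s(8, 8, 8)   # 8 s transfer + 8 s verify = 16 s per hop
total = 5 * per_hop              # 80 s for the far edge of the network
# Transfer time as a fraction of the 600 s average block interval (the 8/600 above):
exposure = hop_delay_s(8, 8, 0) / 600
print(per_hop, total, round(exposure, 4))
```

The 80-second figure in the quote is just 5 hops times 16 seconds per hop; the orphan-risk argument in the reply turns on that fraction of the 10-minute block interval.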
Post
Board: Bitcoin Discussion
Topic: Re: Analysis and list of top big blocks shills (XT #REKT ignorers)
by tl121 on 10/01/2016, 19:09:47 UTC

I would think you'll send your received blocks to more than 1 other node.

The number of blocks nodes receive averages out to approximately the number of blocks those nodes send. "The number of takeoffs approximately equals the number of landings." If you have a well-connected node with lots of bandwidth, then it's possible you will send out more data than you receive, but that is unlikely to happen if you have limited bandwidth.  Thus the network average remains 1 to 1 (except for new nodes).

If a new node starts up, then it will have to receive each block once.  If an incompetently run node keeps crashing, losing the entire block database, and has no backup, then this will happen multiple times.  This part of the problem can easily be fixed by nodes with limited upstream bandwidth deprioritizing transmission of older blocks during times of congestion. (One of many possible network optimizations that can and will appear should they be needed.)


Post
Board: Bitcoin Discussion
Topic: Re: Is Google supercomputer a threat?
by tl121 on 02/01/2016, 19:14:47 UTC
Google's computer is a D-Wave adiabatic quantum computer.  It is not a general-purpose quantum computer and cannot be used to run variants of Shor's algorithm or Grover's algorithm.  This means that it poses no threat to bitcoin, either to signatures or to hashes (including proof of work).
Post
Board: Development & Technical Discussion
Topic: Re: How to counter Ram Scapers?
by tl121 on 01/01/2016, 21:06:28 UTC
Three words: Hardware Security Modules.

The keys are stored on an external device and never leave that device.

Every time the private key is needed to sign something, a transaction for example, the transaction is sent to the HSM and the signed transaction comes back.

I suspect this kind of thing will become more popular in Bitcoin.

HSMs can generally also be used to encrypt and sign other messages, like emails, etc.

They're not in mainstream use right now. Not yet, anyway. This is going to change in 2016.

The point is that the key is never disclosed to the computer, so it never enters the system's RAM, ever.

There's a little Linux USB computer named the 'USB Armory' which could be used to create something like this.

Two words:  hardware wallet.
One word:  Trezor
Post
Board: Development & Technical Discussion
Topic: Re: 0.6% of the nodes accept blocks > 1 MB **TODAY** No need to wait for BIP101
by tl121 on 01/01/2016, 21:03:59 UTC
If you are running a Bitcoin Unlimited node, be sure to check that it is open and being counted by the Bitnodes snapshot: https://bitnodes.21.co/nodes/?q=/BitcoinUnlimited:0.11.2/

You might check this page periodically to make sure that your node hasn't gone missing for some reason, particularly if you've had to restart it.
Post
Board: Bitcoin Discussion
Topic: Re: Bitcoin XT - Officially #REKT (also goes for BIP101 fraud)
by tl121 on 01/01/2016, 19:41:23 UTC
Bitfury's paper here:

http://bitfury.com/content/4-white-papers-research/block-size-1.1.1.pdf

"The table contains an estimate of how many full nodes would no longer function without hardware upgrades as average block size is increased. These estimates are based on the assumption that many users run full nodes on consumer-grade hardware, whether on personal computers or in the cloud. Characteristics of node hardware are based on a survey performed by Steam [19]; we assume PC gamers and Bitcoin enthusiasts have a similar amount of resources dedicated to their hardware.

The exception is RAM: we assume that a typical computer supporting a node has no less than 3 GB RAM, as a node requires at least 2 GB RAM to run with margin [15]. For example, if block size increases to 2 MB, a node would need to dedicate 8 GB RAM to the Bitcoin client, while more than half of PCs in the survey have less RAM."

Based on this estimate, raising the block size to 4 MB would drop 75% of the nodes from the network.



The computer that runs my Bitcoin Unlimited node has an 8 GB RAM module.  I paid $67 for this memory module 13 months ago.  I note that Amazon is selling the same module today for $35 USD.  It is unreasonable to cripple bitcoin to support users running obsolete hardware.


Let's compare the cost for 4 MB blocks:

1. You spend several hundred dollars on hardware and build a new node dedicated to bitcoin (many gaming machines do not support 16 GB of memory, so you need to upgrade pretty much the whole machine), in the hope of maintaining the $0.05 fee for bitcoin transactions (and it requires thousands of other nodes to do the same as you).

2. You use those several hundred dollars to pay the transaction fees (enough for at least one hundred transactions even if the fee rose 100x to $5 per transaction).

Notice that setting up a full node does not benefit the node operator in any way, and raising the block size would require thousands of such voluntarily operated nodes to upgrade. So I guess any rational human would refuse the node upgrade and pay the fee instead. I guess that even if the fee rose to a prohibitive level, the average user would still pay the fee rather than set up a full node on dedicated hardware.



I am running a node because doing so is the only way to see how bitcoin actually works. I am not running it to help the network in any way, nor to somehow magically keep the fees low so I will pay less.  It could be argued that, as a consequence of my low upstream bandwidth, I am actually "hurting" the network by downloading more than I upload ("leeching").  I do get the benefit of greater security and privacy from running the node.  Also, the same machine runs other servers, and I would probably be running it 24/7 even if I stopped running a bitcoin node.  I have some older machines that I could use for servers, but they consume enough electricity running 24/7 that the newer machine has already paid for itself.

Were I to pay exorbitant transaction fees I would be doing nothing to help the growth of bitcoin.  It would be a foolish decision, because the value of the bitcoins I hold would certainly go down.  Of course, if I were astute, I might pay one large transaction fee and dump all of my bitcoins, shut down my node, and stop wasting my time replying to trolls and sock puppets.
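The quoted cost comparison reduces to a break-even calculation. The $500 hardware cost and the $5 fee are the poster's hypothetical figures, not market data:

```python
def breakeven_txs(hardware_cost_usd, fee_per_tx_usd):
    """How many transaction fees the price of a hardware upgrade would cover."""
    return hardware_cost_usd / fee_per_tx_usd

# "Several hundred dollars" of hardware vs. a fee risen 100x from $0.05 to $5:
print(breakeven_txs(500, 5.0))    # 100.0 transactions at the inflated fee
print(breakeven_txs(500, 0.05))   # 10000.0 transactions at the original fee
```

This is the "at least one hundred transactions" figure in the quote; whether that trade-off favors upgrading depends entirely on how many transactions a user expects to make.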
Post
Board: Bitcoin Discussion
Topic: Re: Bitcoin XT - Officially #REKT (also goes for BIP101 fraud)
by tl121 on 31/12/2015, 19:50:32 UTC
Bitfury's paper here:

http://bitfury.com/content/4-white-papers-research/block-size-1.1.1.pdf

"The table contains an estimate of how many full nodes would no longer function without hardware upgrades as average block size is increased. These estimates are based on the assumption that many users run full nodes on consumer-grade hardware, whether on personal computers or in the cloud. Characteristics of node hardware are based on a survey performed by Steam [19]; we assume PC gamers and Bitcoin enthusiasts have a similar amount of resources dedicated to their hardware.

The exception is RAM: we assume that a typical computer supporting a node has no less than 3 GB RAM, as a node requires at least 2 GB RAM to run with margin [15]. For example, if block size increases to 2 MB, a node would need to dedicate 8 GB RAM to the Bitcoin client, while more than half of PCs in the survey have less RAM."

Based on this estimate, raising the block size to 4 MB would drop 75% of the nodes from the network.



The computer that runs my Bitcoin Unlimited node has an 8 GB RAM module.  I paid $67 for this memory module 13 months ago.  I note that Amazon is selling the same module today for $35 USD.  It is unreasonable to cripple bitcoin to support users running obsolete hardware.
Post
Board: Bitcoin Discussion
Topic: Re: Bitcoin XT - Officially #REKT (also goes for BIP101 fraud)
by tl121 on 31/12/2015, 19:35:36 UTC
But the simulation by Bitfury already indicated that we will have a severe performance problem with 4 MB blocks on an average home computer.
I'd like to see that report. Got a link?

With only several thousand running full nodes now, it seems to me that 'average home computer' is not the limiting issue.
The primary limiting technological factor for blocksize today is bandwidth and latency.

Just a nit, but there is a subtle point: the primary limiting technological factor for transaction rate is bandwidth and latency.  (This changes the wording so as to eliminate counting games such as SegWit.)
Post
Board: Bitcoin Discussion
Topic: Re: Bitcoin XT - Officially #REKT (also goes for BIP101 fraud)
by tl121 on 30/12/2015, 23:02:44 UTC

It is a very misleading claim that bitcoin users do not need to trust a centralized authority; in fact, everyone who uses bitcoin is trusting the centralized protocol originally designed by Satoshi: every miner, node, exchange, merchant, and user, no exception.

One does not have to "trust" a protocol.  Actually, in the case of bitcoin, there is no protocol separate from some version of the bitcoin software.  One can inspect that software and see in detail what it does and how it does it.  One can build the software from source code if one doesn't trust the downloadable binaries.  If one lacks the necessary skills to understand the source code, one can hire people to review it on one's behalf.

Now contrast this situation with the one you face as a user of the fiat banking system, or of a fractional-reserve bitcoin exchange such as Mt. Gox.
Post
Board: Bitcoin Discussion
Topic: Re: Did Gmaxwell stop working on Bitcoin?
by tl121 on 27/12/2015, 03:14:01 UTC

Maxwell has claimed that he has experience scaling large systems.  He used to work at Juniper. Presumably he worked on some cryptographic aspects of their products as well.  Draw your own conclusions.
Post
Board: Bitcoin Discussion
Topic: Re: Capacity increases for the Bitcoin system
by tl121 on 23/12/2015, 21:12:45 UTC
Still not sure about segregated witness. If it doesn't carry any risk, why didn't Satoshi think of it from the beginning, when he limited the block to 1 MB?

I welcome the syncing speed-up; this is needed for those who don't have a strong machine. Not my case, as I can sync and open the client in 20 seconds...

Yes, it will be great to be able to open the client faster; in my case it is very much needed, because my hardware is 5 years old now. About the segregated witness feature, I'm not sure, but even if it only has an impact on light clients it's still important, because most people aren't going to be running full nodes anyway, so improvements in this department are welcome.

The code speedup is a useful improvement.  However, it does not increase the capacity of the bitcoin system since the capacity is limited artificially by the 1 MB blocksize limit. 

Segregated witness does nothing for light clients.  It may increase the network capacity in a complex way, because it changes what counts against the 1 MB block limit.  It does not reduce bandwidth usage at all, so it amounts to a complex way of partially bypassing the 1 MB block limit.  The same effect could be achieved with much less effort by a simple increase in the block size, say to 2 MB.  Segregated witness also requires all of the clients to be updated in a non-trivial way, including SPV clients, which don't care about the block limit.

What you are seeing is a political game being played by bitcoin core to make it look like they are doing something.  For someone who knows how bitcoin works they are not doing good engineering.  In particular, they are not following the KISS principle. (Keep It Simple Stupid.  Don't do something complex when a simple change can accomplish the same thing.)


We can't raise the block size too much because it would compromise the decentralization of nodes. For example, for me downloading the blockchain is a pain in the ass. If the blockchain gets bigger, I'm just not going to deal with running a node. Slowly, fewer and fewer people will care, and the distribution of nodes will become centralized, which banks and governments would love.
But yes, 2 MB seems like a good compromise, even though it solves nothing in the long run; in the long run we aren't scaling up without something like the LN anyway.

Downloading the blockchain was slow until a few months ago, when a change in the networking software provided a big speedup, making verification the limit on most computers.  In the pipeline is faster software for validating signatures, which should provide another large speedup.  As the size of the blockchain expands (as it will with time, unless Bitcoin dies), there will be more pressure for software improvements to speed things up.  This pressure will be maximized if the block limit is raised or removed.  Finally, it is possible to prune the blockchain, and this is in the process of being phased in.

Post
Board: Bitcoin Discussion
Topic: Re: Capacity increases for the Bitcoin system
by tl121 on 22/12/2015, 23:26:31 UTC
Still not sure about segregated witness. If it doesn't carry any risk, why didn't Satoshi think of it from the beginning, when he limited the block to 1 MB?

I welcome the syncing speed-up; this is needed for those who don't have a strong machine. Not my case, as I can sync and open the client in 20 seconds...

Yes, it will be great to be able to open the client faster; in my case it is very much needed, because my hardware is 5 years old now. About the segregated witness feature, I'm not sure, but even if it only has an impact on light clients it's still important, because most people aren't going to be running full nodes anyway, so improvements in this department are welcome.

The code speedup is a useful improvement.  However, it does not increase the capacity of the bitcoin system since the capacity is limited artificially by the 1 MB blocksize limit. 

Segregated witness does nothing for light clients.  It may increase the network capacity in a complex way, because it changes what counts against the 1 MB block limit.  It does not reduce bandwidth usage at all, so it amounts to a complex way of partially bypassing the 1 MB block limit.  The same effect could be achieved with much less effort by a simple increase in the block size, say to 2 MB.  Segregated witness also requires all of the clients to be updated in a non-trivial way, including SPV clients, which don't care about the block limit.

What you are seeing is a political game being played by bitcoin core to make it look like they are doing something.  For someone who knows how bitcoin works they are not doing good engineering.  In particular, they are not following the KISS principle. (Keep It Simple Stupid.  Don't do something complex when a simple change can accomplish the same thing.)
Post
Board: Bitcoin Discussion
Topic: Re: Capacity increases for the Bitcoin system
by tl121 on 22/12/2015, 22:42:08 UTC
Beware of propaganda.  There is controversy regarding these changes.  There are other internet sources where you can see other viewpoints on this issue, such as r/btc.