
Showing 20 of 227 results by SapphireSpire
Post
Topic
Board Beginners & Help
Topic OP
What is the DAA's Clock/Calendar?
by SapphireSpire on 06/09/2025, 16:33:01 UTC
Every two weeks, the difficulty adjustment algorithm compares the number of blocks produced over the previous two weeks with the expected number, given a target block interval of ten minutes, and adjusts the difficulty up if there are too many blocks or down if there are too few.

My question is, what clock/calendar does the algorithm depend on to determine block times?

Would the developers trust timestamps in the block headers, or internet time servers? AFAIK, the only reliable information in a block about its block time is the nonce count, but it's not possible to calculate the block time without also knowing the miner's hash rate (block_time = nonce_count / hash_rate).

In any case, the DAA assumes that the next 2016 blocks will take as long as the last 2016 when they never do, which is why actual block times are all over the place, anywhere from 0.3 seconds to 3 hours.
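For reference, a retarget rule that trusts only header timestamps can be sketched in a few lines. This is a simplified model of the kind of adjustment described above (integer targets and the factor-of-four clamp are assumptions of the sketch, not claims about any exact implementation):

```python
# Sketch: difficulty retarget driven purely by block-header timestamps,
# with no external clock or time server involved.
TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

def next_target(old_target: int, first_ts: int, last_ts: int) -> int:
    """Scale the target by actual/expected timespan of the retarget
    window. A larger target means lower difficulty."""
    actual = last_ts - first_ts
    # Clamp the adjustment to a factor of four in either direction.
    actual = max(TARGET_TIMESPAN // 4, min(actual, TARGET_TIMESPAN * 4))
    return old_target * actual // TARGET_TIMESPAN
```

If the window took half the expected time, the target halves (difficulty doubles); the clamp bounds how far one window can move it.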
Board Development & Technical Discussion
Re: Turn based mining eliminates the 51% attack
by SapphireSpire on 25/08/2025, 18:41:51 UTC
If a miner takes too long, more than a single block interval, the second address is free to compete with the first address in the next block interval. Each block interval is a turn. If they both fail to produce a block in the second turn, the third address on the list is free to compete in the third turn. The pattern continues until a block is found. When a block is found, the next turn goes to the address after the one who produced the last block. For example, if the second address finds a block in the third turn, the next turn goes to the third address.

Are you aware of (estimated) average time between block[1]? How do you handle the fact that each node may have slightly different date/time?
https://blockchair.com/bitcoin/charts/average-block-interval

Average time between blocks is just that, it approximates the target block interval.

Node clocks and calendars can differ by time zone, but even nodes in the same time zone can run a little fast or slow. Dedicated full-time miners tend to have the best internet connections and, with a good communications protocol, such as QUIC, which Solana uses, they can keep themselves synced to within milliseconds. They will give each other a few seconds of tolerance, but they will know when blocks arrive too soon or too late.
Board Development & Technical Discussion
Re: Turn based mining eliminates the 51% attack
by SapphireSpire on 24/08/2025, 18:17:29 UTC
I just generate more of my own addresses to keep adding.
Nobody is going to add your address to the list if you don't stake it first and win.
Board Development & Technical Discussion
Merits 2 from 1 user
Topic OP
Turn based mining eliminates the 51% attack
by SapphireSpire on 24/08/2025, 17:57:15 UTC
⭐ Merited by Mia Chloe (2)
By 51% attack, I'm specifically referring to an attack in which an adversary with more than half the hash power of the network is able to double spend. No matter how high the hashrate is, the risk of a 51% attack is always there.

This turn-based mining eliminates the risk of a 51% attack because hash power no longer influences a miner's win rate. It introduces competition only when necessary, to deal with slow or non-responsive miners, and employs PoS to prevent spam. Here's the idea:

  • Every block contains a list of miner payment addresses.
  • Addresses are added to the bottom of the list from an address pool, similar to the mempool.
  • When a miner's address reaches the top of the list, they get the opportunity to mine a new block.

When a node wants to mine, they must stake their address to get it added to the pool. When the next block is mined, the miner modifies the list from the last block by including any new addresses from the pool to the bottom of the list and removing their address, plus any addresses above theirs, from the top of the list.

If a miner takes too long, more than a single block interval, the second address is free to compete with the first address in the next block interval. Each block interval is a turn. If they both fail to produce a block in the second turn, the third address on the list is free to compete in the third turn. The pattern continues until a block is found. When a block is found, the next turn goes to the address after the one who produced the last block. For example, if the second address finds a block in the third turn, the next turn goes to the third address.

Turn-based mining helps decentralize mining by eliminating the incentives for pool mining. Pool staking will be a thing, though, increasing demand for the coin. Mining difficulty is still determined by the block rate, but the block rate is more consistent, and the difficulty will normalize to the average solo mining speed. It also saves a lot of energy and improves efficiency.
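The turn rules described above can be sketched as simple list operations. This is an illustrative sketch under the rules stated in this post; the function names are hypothetical:

```python
def eligible_miners(address_list, elapsed_intervals):
    """Turn rule: in the first interval only the top address may mine;
    each further interval without a block frees the next address on the
    list to compete as well."""
    n = min(elapsed_intervals + 1, len(address_list))
    return address_list[:n]

def advance_list(address_list, producer, staked_addresses):
    """After `producer` finds a block: remove it and every address
    above it from the top, then append newly staked addresses from the
    pool to the bottom."""
    i = address_list.index(producer)
    return address_list[i + 1:] + list(staked_addresses)
```

With a list ["a", "b", "c"], if "b" finds a block in the third turn, advance_list leaves "c" on top for the next turn, matching the example above.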
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Can't Signatures Be Discarded After Confirmation?
by SapphireSpire on 26/07/2025, 12:57:49 UTC
⭐ Merited by stwenhao (1)
I'm not saying we stop checking signatures, only that they not be included in blocks, to save space. Signatures are big and excluding them would save a lot of space. The next block is only valid if it respects the original transaction data, which nodes still have in their mempools, so I don't see how miners can steal anything. New nodes, or nodes that have been offline for a while, would only need to trust that the tip of the blockchain is as good without signatures as it is now, which it would be.
Board Development & Technical Discussion
Merits 1 from 1 user
Re: TXID+OGH for improved TPS
by SapphireSpire on 15/07/2025, 17:01:31 UTC
⭐ Merited by stwenhao (1)
You can do that now in your own node. Merkle tree is constructed in a way, where you can discard transaction data, and keep only transaction IDs, if you want to.

Also, if you want to keep track of the UTXO set, then you can keep just "txid:vout", to uniquely identify each unspent coin.
The Merkle tree permits pruning, but doesn't improve TPS. It requires you to download full transaction data before pruning, which is a waste of bandwidth. And vout is just an index value. OGH is a checksum.
Board Development & Technical Discussion
Merits 5 from 3 users
Topic OP
TXID+OGH for improved TPS
by SapphireSpire on 15/07/2025, 04:08:55 UTC
⭐ Merited by nutildah (2) ,vapourminer (2) ,stwenhao (1)
Transactions per second (TPS) is block capacity, in transactions, divided by the block interval in seconds. The average size of a transaction is 350 bytes, so a 1 MiB block can contain 2,995 transactions and, with a block interval of 600 seconds:
  • a 1 MiB block supports 4, maybe 5 TPS
  • a 2 MiB block supports 9, maybe 10 TPS
  • a 4 MiB block supports 19, maybe 20 TPS

What if blocks only contained TXIDs instead of full transaction data? An attacker might exploit the lack of transaction data by fabricating a false transaction history and then, for each phony transaction, iterating through the output payment address like a nonce until its TXID matches one in the block. To fix this, each TXID is paired with its output group hash (OGH), which is generated from the transaction's group of outputs. An attacker can't match both the TXID and the OGH, so the block references can safely be used to validate full transaction data, which is queried separately.

If the OGH is also 256 bits:
  • a 1 MiB block will support 27 TPS
  • a 2 MiB block will support 54 TPS
  • a 4 MiB block will support 109 TPS

Of course, it gets even better with a shorter block interval.
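The arithmetic behind both tables can be checked with a short sketch, assuming 1 MiB = 1,048,576 bytes and 64 bytes per TXID+OGH pair (32 bytes each):

```python
MIB = 1024 * 1024  # 1 MiB in bytes

def tps(block_bytes: int, entry_bytes: int, interval_s: int = 600) -> float:
    """Transactions per second = entries that fit in one block,
    divided by the block interval in seconds."""
    return (block_bytes // entry_bytes) / interval_s

# Full ~350-byte transactions: tps(1 * MIB, 350) is just under 5 TPS.
# 32-byte TXID + 32-byte OGH:  tps(1 * MIB, 64) is about 27 TPS.
```

Shortening interval_s scales all of these figures up proportionally, which is the "even better" case mentioned above.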
Board Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 06/07/2025, 06:27:36 UTC
Your concept thus needs an additional mechanism to prevent the sybil attack.
I just needed to sort out the fee. Smiley
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Cosign Consensus
by SapphireSpire on 04/07/2025, 22:09:23 UTC
⭐ Merited by stwenhao (1)
when the attacker has a high enough number of nodes, he can trick other nodes into accepting the double spend transaction.
No, he can't. The minority of honest nodes, even if it's just one, will have valid copies of both transactions proving the double spend. And any nodes that don't report both transactions are obviously participating in a Sybil attack and would be added to a block list.
Board Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 04/07/2025, 02:18:26 UTC
If an attacker manages to flood the mempool with so many of his own addresses that he ends up owning all of the addresses in the priority list of an output that he also owns, despite random selection, he can attempt to double spend it. If he cosigns them before he publishes them, they will be invalid and won't even propagate. He must publish his signed transactions before he can cosign them, so everyone can see them. As soon as he cosigns one, the others are invalidated. If he cosigns more than one, he merely locks up his coins until he resolves the conflict, either by redacting all but one transaction, or by using one of the other addresses on his list to cosign just one of them.
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Cosign Consensus
by SapphireSpire on 03/07/2025, 00:32:53 UTC
⭐ Merited by stwenhao (1)
I see you updated the OP, @SapphireSpire. But I don't see how the changes really solve the problems pointed out.

Each output gets a big list with 100 to 1000 addresses, for redundancy. Once the sender broadcasts their transaction, their address is added to the pool so they can be selected as a cosigner by other users.
Again, there is no global mempool, so it's the nodes themselves who select the "big list of addresses". And that continues to be the problem here. There is no way to prove a transaction was broadcasted. All ways to achieve this can be sybil attacked (or involve traditional byzantine fault proof algorithms, like PBFT, or PoW or a similar mechanism).

The highest address on the list is valid.
What's the "highest address"? The address with the oldest output perhaps? Still the node of the attacker compiles the list, not some "global entity". So again -- they can "grind" through different configurations and eventually they'll reach a state where they can sign a double spend transaction.

What you would need is some interactive mechanism where you can ensure that really different nodes interact with each other which are not from the same entity. And there you'll probably need PoS or PoW to achieve this via incentives.

If a double spend attack involves multiple transactions with one input for the same UTXO, the fee goes to the highest cosigner on the list who correctly signs just one input. UTXOs are subject to change by this process until the top cosigner responds, or an output is spent.
I don't see how that changes the problem described above.

If a double spend attack involves multiple transactions, each with multiple inputs for the same set of UTXOs, the inputs should be listed by the size of the TXIDs of the UTXOs they reference in ascending order and cosigners must sign them in that order to avoid signing more than one transaction. The one who correctly signs the first input is the one who ultimately decides which of these transactions is valid.
Let's see how a double spend attack would be carried out in that protocol:

- Attacker pays to merchant.
- In that payment, the attacker ensures that he's the only one who co-signs the payment with one of his addresses. He can compile as many lists as he wants.
- Attacker gets the (virtual) good he wants to purchase, or if the victim is an exchange withdraws everything via cryptocurrency.
- Now the attacker builds a new transaction with the same UTXO. Again he ensures that the list only contains cosigning addresses of his own nodes. Again he can compile as many lists as he wants.
- The attacker deletes all remnants of the old transaction and its cosigners from his nodes. There may be other nodes, apart from the victim's node, still having the old transaction in their database. But the attacker simply tries to outnumber these nodes via a sybil attack, so the new transaction becomes part of the "consensus".

The sybil attack in the last step is the problem PoW (and at least partly PoS) solve, and that's what's missing in the protocol.

@stwenhao: If the protocol doesn't work on its own, then you're correct, the lineage back to genesis via PoW would then be the only protection. But in my post I referred to the (unlikely) situation that the protocol works.
Sybil attacks are used to subvert reputation and voting systems. The random selection of addresses into priority lists is not a reputation or voting system. As long as the opportunity to make the decision of "what transaction is valid" is randomly distributed among all addresses, and the network can never be stuck or fooled by any choice the decider makes, it doesn't really matter who makes the decision. So go ahead and grind out lists so you can flood the mempool with transactions that contain your addresses. Perhaps that's the work that's missing.
Board Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 02/07/2025, 16:34:48 UTC
I just want to say thanks for all the insights and criticisms. In response, I have made substantial changes to my OP. Check it out.  Smiley
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Cosign Consensus
by SapphireSpire on 30/06/2025, 19:57:18 UTC
⭐ Merited by vjudeu (1)
1. How do you prove that the cosigners were chosen randomly?
By using a verifiable random function.

2. A typical transaction has one input and two outputs. This implies a shortage of inputs available to cosign. How do you ensure that cosigners are available for each input and inputs are available for each cosigner?
It's only typical for consumers to create transactions that split coins. Merchants typically create transactions that merge the many smaller coins created by consumer activity into one or a few large coins. That means there will be times when the number of inputs is less than the number of outputs, which will delay cosigners, and other times when the number of inputs is greater than the number of outputs, which will delay confirmations, but there will always be a natural average of 1:1.

3. An attack seems possible in which the attacker creates transactions but refuses to cosign their assigned inputs. I think this was already suggested, but your reply was not clear to me.
When a cosigner is nonresponsive, the sender must replace their address. They must redact the first or previous transaction before publishing a new one to avoid having more than one version of a transaction in the mempool at a time.
Board Development & Technical Discussion
Merits 2 from 2 users
Re: Cosign Consensus
by SapphireSpire on 30/06/2025, 13:47:47 UTC
⭐ Merited by stwenhao (1) ,ABCbits (1)
If there are no fees, then what is the incentive to sign anything?
You have to cosign something before you can spend your coins.

that person can just sign fake transactions, or just reject to sign anything, and halt your network entirely, by making you unable to send any transaction at all.
Fake transactions are invalid, and nodes won't even propagate invalid transactions.

Option 1 is to sign nothing. When this happens, a sender can republish their transaction with a different address for the same UTXO. When that transaction is cosigned, the unsigned transaction is discarded.

Also, when your network will just start, then there will be just a few people interested in your system. And then, if you have for example 5 users, then it doesn't really matter, which one will be randomly picked.
There's at least one user per address, but there can be many addresses per user, so we can never know how many users there are. Only that the number of users is equal to or less than the number of addresses. In any case, it is highly unlikely for a new coin on an exchange to have such a small number of addresses for more than a fraction of a second.

There's no inflation because coins are not generated, so the coin supply is 100% premined.
Board Development & Technical Discussion
Merits 15 from 5 users
Topic OP
Cosign Consensus
by SapphireSpire on 30/06/2025, 01:43:50 UTC
⭐ Merited by odolvlobo (10) ,ABCbits (2) ,stwenhao (1) ,d5000 (1) ,mcdouglasx (1)
(Not the same 'cosign' used in multisig) Transactions are confirmed individually as users randomly select each other to cosign their inputs. There are no fees, and there is no inflation because there is no blockchain, no blocks, and no coinbase transactions. It's fully decentralized, but there's no work or staking, so it has no measurable impact on the environment.

Before a sender publishes their transaction, they need to select a cosigner for each UTXO they're spending. Nobody can be trusted to select a cosigner at will, or they'll choose themselves or a friend, so they have to run a proof of randomness algorithm, PORA. The PORA selects a payment address at random from the pool of unconfirmed transactions. The owners of the selected addresses become the cosigners of the UTXOs being spent in the sender's transaction.

The outputs of a transaction can't be spent until the inputs are cosigned and each output address has been used to cosign an input in another transaction. Once the sender broadcasts their transaction, the first thing they want to do is cosign something, so they wait to be selected as a cosigner. Once selected, they have three options:
  • cosign nothing
  • cosign only one input, in one transaction, for the corresponding UTXO
  • cosign every input they see

Options 1 and 3 are a waste of time for the cosigner because they neither unlock their outputs nor aid double spend attacks, which are obvious to everyone at that point.

Option 2 is the only option that unlocks their coins, protects their wealth, and secures the network.
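The selection step could look something like the following. This is an illustrative stand-in, not the PORA itself: a bare hash over public data is verifiable by everyone but still grindable, so a real scheme would need an unbiasable verifiable random function. Names are hypothetical:

```python
import hashlib

def select_cosigner(utxo_ref: str, address_pool: list) -> str:
    """Deterministically map the UTXO being spent to an address drawn
    from the pool of unconfirmed-transaction addresses, so any node can
    re-derive and verify the same choice."""
    digest = hashlib.sha256(utxo_ref.encode()).digest()
    index = int.from_bytes(digest, "big") % len(address_pool)
    return address_pool[index]
```

Because every node computes the same index from the same inputs, the sender cannot simply pick a friend; the open question, discussed in the replies, is whether the pool contents themselves can be ground.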
Board Development & Technical Discussion
Re: A faster alternative for blockchain consensus
by SapphireSpire on 20/11/2024, 23:52:32 UTC
I have finally addressed the glaringly obvious oversight of my previous scheme.
Board Development & Technical Discussion
Re: A faster alternative to blockchain consensus
by SapphireSpire on 14/11/2024, 02:43:57 UTC
after mining some blocks someone who has obtained some ticket private keys can construct an alternative chain which replaces the block they actually produced with an alternative one.

It's also not clear to me why you think what you propose is faster.
Because the work happens before the block is created. It eliminates the block interval and reduces confirmation time to a few seconds: the time it takes a block message to cross the network. Why does block creation need to be slow and predictable?
Board Development & Technical Discussion
Re: A faster alternative to blockchain consensus
by SapphireSpire on 13/11/2024, 02:33:42 UTC
I have greatly altered the original idea.
Board Development & Technical Discussion
Merits 2 from 2 users
Topic OP
An alternative to blockchain consensus
by SapphireSpire on 04/11/2024, 18:00:00 UTC
⭐ Merited by garlonicon (1) ,vapourminer (1)
Ideally, we want to tie every output to the input that spends it, so it can't be double-spent. But this can't be done before the input exists, so the output must be tied to an intermediary, which gets tied to the input when it's created. This introduces a degree of trust, and a degree of risk.

In this blockless consensus scheme, the intermediary is a ticket. Tickets are created by mediators, who serve the same role as miners. A ticket contains the txid and index of the target output, a confirmation public key, and a payment address. Mediators use a double-key algorithm to create their confirmation pubkey, similar to the one used to create a payment address. They have to scan over a range of private keys until they find one that produces a public key with at least one leading zero. They publish their ticket as proof, which everyone adds to their ticket pool.

When the owner of an unspent output creates a transaction to spend it, they must include, in each input, the most difficult pubkey available in the ticket pool for the corresponding output, and create a coinbase output with each mediator's payment address. These input/coinbase pairs always appear first in the transaction, with the same index values, and are sorted by difficulty in ascending order. The owner must sign each input independently of the others, in case any of them need to be replaced. After the owner publishes the transaction, the mediators confirm it by signing the inputs that contain their pubkeys. Every mediator except the last signs only their own input; the last mediator signs everything. To minimize network traffic, inputs are signed in the order of their index values.

There's no limit to the number of tickets per unspent output, but an input can only have one unique confirmation pubkey, so pubkeys can't be used more than once and cannot coexist in multiple tickets at once. There's no limit to the difficulty of a confirmation pubkey, so mediators can constantly work to upgrade their tickets with more difficult pubkeys. The effort required to upgrade a pubkey increases exponentially with its difficulty, so the coinbase reward should scale exponentially as the zero count increases linearly. After an output is spent, all unused pubkeys can be presented in new tickets for other unspent outputs, so the work isn't wasted.
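The mediator's search can be sketched as follows, with SHA-256 of a counter standing in for real public-key derivation, and difficulty counted in whole zero bytes rather than bits (both are assumptions of the sketch; names are hypothetical). Each extra zero byte multiplies the expected work by 256, matching the exponential scaling described above:

```python
import hashlib

def grind_pubkey(seed: bytes, zero_bytes: int, max_tries: int = 1 << 20):
    """Scan private-key candidates until the derived 'pubkey' starts
    with the required number of leading zero bytes."""
    for priv in range(max_tries):
        pub = hashlib.sha256(seed + priv.to_bytes(8, "big")).digest()
        if pub.startswith(bytes(zero_bytes)):
            return priv, pub
    return None  # difficulty too high for this try budget
```

Anyone can verify the published ticket in one hash, while producing it took many, which is what makes the pubkey a compact proof of work.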

In the absence of a consensus protocol, a p2p network has no method of resolving double spend attacks consistently, by always accepting the same transaction. A simple decision is all the network requires. A miner performs this function by choosing which transaction to include in their block, while the mediator chooses which transaction to sign.

The question is whether a signature is as secure as a block hash. Just as a block hash is a checksum on the block and every block before it, the last signature in a transaction is a checksum on the transaction, and every input is chained to a previous transaction. But a signature is still no iron-clad guarantee of the absence of a double-spend transaction, and neither is a block hash. Just as a malicious mediator can sign more than one input at a time, a malicious miner can solve more than one block at a time. With a good rule set that covers all possible outcomes, a network using either protocol can mitigate the misbehavior, but there is always a risk that a payee can be fooled. The only real difference between a signature and a block hash is speed. A signature takes milliseconds, and only delays a single transaction, while a block hash can take hours, and delays everything.

Usually, all a mediator must do is sign the first input they see that contains their pubkey. If they sign more than one input for the same output, their ticket is destroyed, because that can never happen by accident. If a mediator fails to sign their input within a reasonable time, a few seconds perhaps, the owner can simply replace the ticket for that input. If an attacker uses different tickets in each double-spend transaction, the pubkey with the lowest difficulty is discarded. This scenario might also happen unintentionally when a ticket is replaced. The more inputs a transaction has, the longer it takes to get confirmed, because it has to travel around the network more. If multiple transactions contain multiple double-spends, each with different tickets, the last mediator should see all of the transactions before they have the opportunity to sign one, and they should sign only if the transaction that contains their pubkey also has the most combined difficulty. If a mediator sees a transaction that contains a weaker pubkey for the same output they have a stronger pubkey for, it serves as advance warning.
Board Development & Technical Discussion
Topic OP
Does the block hash function need to be cryptographically secure?
by SapphireSpire on 15/01/2024, 19:18:54 UTC
Cryptographically secure hash functions are irreversible so that ciphertexts can't be decrypted by running the function in reverse. But none of the data in a block is secret, so the block hash is just a checksum, and shouldn't need to be cryptographically secure. Am I wrong?
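One property at stake here is second-preimage resistance rather than secrecy: if the block hash were a simple checksum, a different "block" with the same hash would be cheap to construct, so the chain of hashes would no longer pin down history. A toy demonstration with a hypothetical 16-bit additive checksum (both functions are illustrative, not any real algorithm):

```python
def checksum16(data: bytes) -> int:
    """Toy non-cryptographic checksum: 16-bit sum of all bytes."""
    return sum(data) % 65536

def forge_second_preimage(original: bytes) -> bytes:
    """Because the checksum is just a sum, a colliding message can be
    built directly: append filler bytes until the sums agree. Against
    SHA-256 the same feat is believed to cost ~2^256 work."""
    prefix = b"forged:"
    deficit = (checksum16(original) - checksum16(prefix)) % 65536
    filler = b"\xff" * (deficit // 255) + bytes([deficit % 255])
    return prefix + filler
```

The forgery needs no brute force at all, which is the difference between a checksum that detects accidents and a hash that resists adversaries.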