Showing 20 of 223 results by SapphireSpire
Board: Development & Technical Discussion
Re: Can't Signatures Be Discarded After Confirmation?
by SapphireSpire on 26/07/2025, 12:57:49 UTC
⭐ Merited by stwenhao (1)
I'm not saying we stop checking signatures, only that they not be included in blocks. Signatures are big, and excluding them would save a lot of space. The next block is only valid if it respects the original transaction data, which nodes still have in their mempools, so I don't see how miners could steal anything. New nodes, or nodes that have been offline for a while, would only need to trust that the tip of the blockchain is as good without signatures as it is now, which it would be.
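
For a rough sense of the savings: in a modern one-input, two-output transaction, the witness (signature plus pubkey) is a large share of the bytes. A back-of-the-envelope sketch in Python; the byte counts below are my assumptions, not figures from the thread:
Code:
WITNESS_PER_INPUT = 107   # ~72-byte signature + 33-byte pubkey + counts (assumed)
TX_SIZE = 222             # typical 1-in/2-out P2WPKH transaction, raw bytes (assumed)
BLOCK = 1024 * 1024       # 1 MiB block

with_sigs = BLOCK // TX_SIZE
without_sigs = BLOCK // (TX_SIZE - WITNESS_PER_INPUT)
print(f"{with_sigs} txs/block with signatures, "
      f"{without_sigs} without ({without_sigs / with_sigs:.1f}x)")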
Board: Development & Technical Discussion
Re: TXID+OGH for improved TPS
by SapphireSpire on 15/07/2025, 17:01:31 UTC
⭐ Merited by stwenhao (1)
You can do that now in your own node. The Merkle tree is constructed in a way where you can discard transaction data and keep only transaction IDs, if you want to.

Also, if you want to keep track of the UTXO set, then you can keep just "txid:vout" to uniquely identify each unspent coin.
The Merkle tree permits pruning, but doesn't improve TPS. It requires you to download full transaction data before pruning, which is a waste of bandwidth. And VOUT is just an index value; OGH is a checksum.
Board: Development & Technical Discussion
Topic OP: TXID+OGH for improved TPS
by SapphireSpire on 15/07/2025, 04:08:55 UTC
⭐ Merited by nutildah (2), vapourminer (2), stwenhao (1)
Transaction capacity per second (TPS) is block capacity divided by the block interval in seconds. The average size of a transaction is 350 bytes, so a 1 MiB block can contain 2,995 transactions and, with a block interval of 600 seconds:
  • a 1 MiB block supports 4, maybe 5 TPS
  • a 2 MiB block supports 9, maybe 10 TPS
  • a 4 MiB block supports 19, maybe 20 TPS

What if blocks contained only TXIDs instead of full transaction data? An attacker might exploit the lack of transaction data by fabricating a false transaction history and then, for each phony transaction, iterating the output payment address like a nonce until its TXID matches one in the block. To fix this, each TXID is paired with its output group hash (OGH), which is generated from the transaction's group of outputs. An attacker can't match both the TXID and the OGH, so the block references can safely be used to validate full transaction data, which is queried separately.
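
A minimal sketch of the pairing, assuming SHA-256 double hashing for both commitments and treating the serialized transaction and its outputs as opaque bytes (the function names are illustrative, not from the OP):
Code:
import hashlib

def txid(tx_bytes):
    # Bitcoin-style double SHA-256 over the whole serialized transaction
    return hashlib.sha256(hashlib.sha256(tx_bytes).digest()).digest()

def output_group_hash(output_bytes):
    # OGH: a second commitment, covering only the serialized outputs
    return hashlib.sha256(hashlib.sha256(output_bytes).digest()).digest()

def block_entry(tx_bytes, output_bytes):
    # 64-byte block entry: TXID (32 bytes) || OGH (32 bytes)
    return txid(tx_bytes) + output_group_hash(output_bytes)

def validate(entry, tx_bytes, output_bytes):
    # A fabricated transaction must collide on both hashes at once
    return entry == block_entry(tx_bytes, output_bytes)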

If the OGH is also 256 bits:
  • a 1 MiB block will support 27 TPS
  • a 2 MiB block will support 54 TPS
  • a 4 MiB block will support 109 TPS

Of course, it gets even better with a shorter block interval.
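
As a sanity check on the numbers above (Python, with 1 MiB = 1,048,576 bytes, a 350-byte average transaction, and a 64-byte TXID+OGH entry):
Code:
BLOCK_INTERVAL = 600          # seconds
MIB = 1024 * 1024

for mib in (1, 2, 4):
    block = mib * MIB
    full_tps = block // 350 / BLOCK_INTERVAL   # 350-byte average transaction
    ref_tps = block // 64 / BLOCK_INTERVAL     # 32-byte TXID + 32-byte OGH
    print(f"{mib} MiB: {full_tps:.2f} TPS full, {ref_tps:.2f} TPS TXID+OGH")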
Board: Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 06/07/2025, 06:27:36 UTC
Your concept thus needs an additional mechanism to prevent the sybil attack.
I just needed to sort out the fee. Smiley
Board: Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 04/07/2025, 22:09:23 UTC
⭐ Merited by stwenhao (1)
when the attacker has a high enough number of nodes, he can trick other nodes into accepting the double spend transaction.
No, he can't. The minority of honest nodes, even if it's just one, will have valid copies of both transactions proving the double spend. And any nodes that don't report both transactions are obviously participating in a Sybil attack and would be added to a block list.
Board: Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 04/07/2025, 02:18:26 UTC
If an attacker manages to flood the mempool with so many of his own addresses that he ends up owning all of the addresses in the priority list of an output he also owns, despite random selection, he can attempt to double spend it. If he cosigns the transactions before he publishes them, they will be invalid and won't even propagate. He must publish his signed transactions before he can cosign them, so everyone can see them. As soon as he cosigns one, the others are invalidated. If he cosigns more than one, he merely locks up his coins until he resolves the conflict, either by redacting all but one transaction, or by using one of the other addresses on his list to cosign just one of them.
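
A toy illustration of why conflicting cosigns lock the attacker's coins rather than double spending them; this is purely my sketch of the rules as described, with invented structures:
Code:
# Toy model: map each UTXO to the set of cosigned transactions spending it
cosigned = {}

def on_cosign(utxo, txid):
    cosigned.setdefault(utxo, set()).add(txid)
    if len(cosigned[utxo]) == 1:
        return "valid"    # exactly one cosigned spender: the spend proceeds
    return "locked"       # conflicting cosigns: coins frozen until resolved

def resolve(utxo, keep):
    # Redact all but one transaction, or have another listed address cosign one
    cosigned[utxo] = {keep}

print(on_cosign("utxo1", "txA"))   # valid
print(on_cosign("utxo1", "txB"))   # locked, until resolve("utxo1", "txA")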
Board: Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 03/07/2025, 00:32:53 UTC
⭐ Merited by stwenhao (1)
I see you updated the OP, @SapphireSpire. But I don't see how the changes really solve the problems pointed out.

Each output gets a big list with 100 to 1000 addresses, for redundancy. Once the sender broadcasts their transaction, their address is added to the pool so they can be selected as a cosigner by other users.
Again, there is no global mempool, so it's the nodes themselves who select the "big list of addresses". And that continues to be the problem here. There is no way to prove a transaction was broadcast. All ways to achieve this can be Sybil attacked (or involve traditional Byzantine fault tolerant algorithms, like PBFT, or PoW or a similar mechanism).

The highest address on the list is valid.
What's the "highest address"? The address with the oldest output perhaps? Still, the node of the attacker compiles the list, not some "global entity". So again -- they can "grind" through different configurations and eventually they'll reach a state where they can sign a double spend transaction.

What you would need is some interactive mechanism where you can ensure that really different nodes interact with each other which are not from the same entity. And there you'll probably need PoS or PoW to achieve this via incentives.

If a double spend attack involves multiple transactions with one input for the same UTXO, the fee goes to the highest cosigner on the list who correctly signs just one input. UTXOs are subject to change by this process until the top cosigner responds, or an output is spent.
I don't see how that changes the problem described above.

If a double spend attack involves multiple transactions, each with multiple inputs for the same set of UTXOs, the inputs should be listed by the size of the TXIDs of the UTXOs they reference in ascending order and cosigners must sign them in that order to avoid signing more than one transaction. The one who correctly signs the first input is the one who ultimately decides which of these transactions is valid.
Let's see how a double spend attack would be carried out in that protocol:

- Attacker pays to merchant.
- In that payment, the attacker ensures that he's the only one who co-signs the payment with one of his addresses. He can compile as many lists as he wants.
- Attacker gets the (virtual) good he wants to purchase, or, if the victim is an exchange, withdraws everything via cryptocurrency.
- Now the attacker builds a new transaction with the same UTXO. Again he ensures that the list only contains cosigning addresses of his own nodes. Again he can compile as many lists as he wants.
- The attacker deletes all remnants of the old transaction and its cosigners from his nodes. There may be other nodes, apart from the victim's node, still having the old transaction in their database. But the attacker simply tries to outnumber these nodes via a Sybil attack, so the new transaction becomes part of the "consensus".

The Sybil attack in the last step is the problem PoW (and at least partly PoS) solve, and that's what's missing in the protocol.

@stwenhao: If the protocol doesn't work on its own, then you're correct, the lineage back to genesis via PoW would then be the only protection. But in my post I referred to the (unlikely) situation that the protocol works.
Sybil attacks are used to subvert reputation and voting systems. The random selection of addresses into priority lists is not a reputation or voting system. As long as the opportunity to make the decision of "what transaction is valid" is randomly distributed among all addresses, and the network can never be stuck or fooled by any choice the decider makes, it doesn't really matter who makes the decision. So go ahead and grind out lists so you can flood the mempool with transactions that contain your addresses. Perhaps that's the work that's missing.
Board: Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 02/07/2025, 16:34:48 UTC
I just want to say thanks for all the insights and criticisms. In response, I have made substantial changes to my OP. Check it out.  Smiley
Board: Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 30/06/2025, 19:57:18 UTC
⭐ Merited by vjudeu (1)
1. How do you prove that the cosigners were chosen randomly?
By using a verifiable random function.
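
For readers unfamiliar with the term: a VRF gives every keyholder a function prove(sk, input) → (output, proof) whose output anyone can check with verify(pk, input, output, proof), so the selection can be audited. A minimal illustration using deterministic RSA signatures as the proof; this is my stand-in (it needs the `cryptography` package), not a standards-compliant ECVRF and not something specified in the thread:
Code:
import hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

PAD, ALG = padding.PKCS1v15(), hashes.SHA256()   # deterministic signatures

def vrf_prove(sk, alpha):
    proof = sk.sign(alpha, PAD, ALG)              # unique for (sk, alpha)
    return hashlib.sha256(proof).digest(), proof  # output is hash of proof

def vrf_verify(pk, alpha, beta, proof):
    try:
        pk.verify(proof, alpha, PAD, ALG)         # raises if proof is invalid
    except Exception:
        return False
    return beta == hashlib.sha256(proof).digest()

sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
beta, proof = vrf_prove(sk, b"utxo:txid:0")
assert vrf_verify(sk.public_key(), b"utxo:txid:0", beta, proof)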

2. A typical transaction has one input and two outputs. This implies a shortage of inputs available to cosign. How do you ensure that cosigners are available for each input and inputs are available for each cosigner?
It's only typical for consumers to create transactions that split coins. Merchants typically create transactions that merge the many smaller coins created by consumer activity into one or a few large coins. That means there will be times when the number of inputs is less than the number of outputs, which will delay cosigners, and other times when the number of inputs is greater than the number of outputs, which will delay confirmations. But since every output is eventually spent by exactly one input, the counts balance out to a natural average of 1:1.

3. An attack seems possible in which the attacker creates transactions but refuses to cosign their assigned inputs. I think this was already suggested, but your reply was not clear to me.
When a cosigner is nonresponsive, the sender must replace that cosigner's address. They must redact the first or previous transaction before publishing a new one, to avoid having more than one version of a transaction in the mempool at a time.
Board: Development & Technical Discussion
Re: Cosign Consensus
by SapphireSpire on 30/06/2025, 13:47:47 UTC
⭐ Merited by stwenhao (1), ABCbits (1)
If there are no fees, then what is the incentive to sign anything?
You have to cosign something before you can spend your coins.

that person can just sign fake transactions, or just refuse to sign anything, and halt your network entirely, by making you unable to send any transaction at all.
Fake transactions are invalid, and nodes won't even propagate invalid transactions.

Option 1 is to sign nothing. When this happens, a sender can republish their transaction with a different address for the same UTXO. When that transaction is cosigned, the unsigned transaction is discarded.

Also, when your network first starts, there will be just a few people interested in your system. And if you have, for example, 5 users, then it doesn't really matter which one is randomly picked.
There's at least one user per address, but there can be many addresses per user, so we can never know how many users there are, only that the number of users is equal to or less than the number of addresses. In any case, it is highly unlikely for a new coin on an exchange to have such a small number of addresses for more than a fraction of a second.

There's no inflation because coins are not generated, so the coin supply is 100% premined.
Board: Development & Technical Discussion
Topic OP: Cosign Consensus
by SapphireSpire on 30/06/2025, 01:43:50 UTC
⭐ Merited by odolvlobo (10), ABCbits (2), stwenhao (1), d5000 (1), mcdouglasx (1)
(Not the same 'cosign' used in multisig) Transactions are confirmed individually as users randomly select each other to cosign their inputs. There are no fees, and there is no inflation because there is no blockchain, no blocks, and no coinbase transactions. It's fully decentralized, but there's no work or staking, so it has no measurable impact on the environment.

Before a sender publishes their transaction, they need to select a cosigner for each UTXO they're spending. Nobody can be trusted to select a cosigner at will, or they'll choose themselves or a friend, so they have to run a proof-of-randomness algorithm (PORA). The PORA selects a payment address at random from the pool of unconfirmed transactions. The owners of the selected addresses become the cosigners of the UTXOs being spent in the sender's transaction.
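
The OP doesn't specify the PORA, but the shape of the selection step might look like this sketch, where a hash of the spending transaction's ID stands in for the verifiable randomness (names invented):
Code:
import hashlib

def select_cosigner(txid, utxo_index, pool):
    # Derive a pool index from data the sender can't choose freely.
    # Note: a real PORA must also resist grinding the txid itself.
    seed = hashlib.sha256(f"{txid}:{utxo_index}".encode()).digest()
    return sorted(pool)[int.from_bytes(seed, "big") % len(pool)]

pool = ["addr_a", "addr_b", "addr_c"]   # addresses from unconfirmed transactions
print(select_cosigner("d0a1b2c3", 0, pool))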

The outputs of a transaction can't be spent until the inputs are cosigned and each output address has been used to cosign an input in another transaction. Once the sender broadcasts their transaction, the first thing they want to do is cosign something, so they wait to be selected as a cosigner. Once selected, they have three options:
  • cosign nothing
  • cosign only one input, in one transaction, for the corresponding UTXO
  • cosign every input they see

Options 1 and 3 are a waste of time for the cosigner: they neither unlock the cosigner's outputs nor aid double spend attacks, which are obvious to everyone at that point.

Option 2 is the only option that unlocks their coins, protects their wealth, and secures the network.
Board: Development & Technical Discussion
Re: A faster alternative for blockchain consensus
by SapphireSpire on 20/11/2024, 23:52:32 UTC
I have finally addressed the glaringly obvious oversight of my previous scheme.
Board: Development & Technical Discussion
Re: A faster alternative to blockchain consensus
by SapphireSpire on 14/11/2024, 02:43:57 UTC
after mining some blocks someone who has obtained some ticket private keys can construct an alternative chain which replaces the block they actually produced with an alternative one.

It's also not clear to me why you think what you propose is faster.
Because the work happens before the block is created. It eliminates the block interval and reduces confirmation time to a few seconds: the time it takes a block message to cross the network. Why does block creation need to be slow and predictable?
Board: Development & Technical Discussion
Re: A faster alternative to blockchain consensus
by SapphireSpire on 13/11/2024, 02:33:42 UTC
I have greatly altered the original idea.
Board: Development & Technical Discussion
Topic OP: An alternative to blockchain consensus
by SapphireSpire on 04/11/2024, 18:00:00 UTC
⭐ Merited by garlonicon (1), vapourminer (1)
Ideally, we want to tie every output to the input that spends it, so it can't be double-spent. But this can't be done before the input exists, so the output must be tied to an intermediary, which gets tied to the input when it's created. This introduces a degree of trust, and a degree of risk.

In this blockless consensus scheme, the intermediary is a ticket. Tickets are created by mediators, who serve the same role as miners. A ticket contains the txid and index of the target output, a confirmation public key, and a payment address. Mediators use a double-key algorithm to create their confirmation pubkey, similar to the one used to create a payment address. They have to scan over a range of private keys until they find one that produces a public key that contains at least one leading zero. They publish their ticket as proof, which everyone adds to their ticket pool.
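
A toy version of the grinding step; the OP doesn't fix a curve or an encoding, so SHA-256 of the private key stands in for real public-key derivation, and difficulty is counted in leading zero bits (both are my assumptions):
Code:
import os, hashlib

def leading_zero_bits(pub):
    n = int.from_bytes(pub, "big")
    return len(pub) * 8 - n.bit_length()

def grind_ticket_key(difficulty):
    # Scan private keys until the derived "public key" has enough leading zeros
    while True:
        priv = os.urandom(32)
        pub = hashlib.sha256(priv).digest()   # stand-in for EC key derivation
        if leading_zero_bits(pub) >= difficulty:
            return priv, pub

priv, pub = grind_ticket_key(difficulty=8)    # one leading zero byte
print(pub.hex())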

When the owner of an unspent output creates a transaction to spend it, they must include, in each input, the most difficult pubkey available in the ticket pool for the corresponding output, and create a coinbase output with each mediator's payment address. These input/coinbase pairs always appear first in the transaction, with the same index values, and are sorted by difficulty in ascending order. The owner must sign each input independently of the others, in case any of them need to be replaced. After the owner publishes the transaction, the mediators confirm it by signing the inputs that contain their pubkeys. All but the last mediator sign their inputs independently. The last mediator signs everything. To minimize network traffic, inputs are signed in the order of their index values.

There's no limit to the number of tickets per unspent output, but an input can only have one unique confirmation pubkey, so pubkeys can't be used more than once, and cannot coexist in multiple tickets at once. There's no limit to the difficulty of a confirmation pubkey, so mediators can constantly work to upgrade their tickets with more difficult pubkeys. The effort required to upgrade a pubkey increases exponentially with its difficulty, so the coinbase reward should scale exponentially as the zero count increases linearly. After an output is spent, all unused pubkeys can be presented in new tickets for other unspent outputs, so the work isn't wasted.
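
Since each extra zero bit doubles the expected grinding work, the scaling described here might look like the following; the base reward is an invented parameter:
Code:
BASE_REWARD = 1   # invented unit; the OP doesn't fix a reward curve

def coinbase_reward(zero_bits):
    # Expected grinding work doubles per leading zero bit,
    # so the reward doubles with it
    return BASE_REWARD * 2 ** zero_bits

print([coinbase_reward(z) for z in range(1, 6)])   # [2, 4, 8, 16, 32]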

In the absence of a consensus protocol, a p2p network has no method of resolving double spend attacks consistently, i.e. by always accepting the same transaction. A simple decision is all the network requires. A miner performs this function by choosing which transaction to include in their block, while the mediator chooses which transaction to sign. The question is whether a signature is as secure as a block hash. Just as a block hash is a checksum on the block and every block before it, the last signature in a transaction is a checksum on the transaction, and every input is chained to a previous transaction. But a signature is still no iron-clad guarantee of the absence of a double-spend transaction, and neither is a block hash. Just as a malicious mediator can sign more than one input at a time, a malicious miner can solve more than one block at a time. With a good rule set that covers all possible outcomes, a network using either protocol can mitigate the misbehavior, but there is always a risk that a payee can be fooled. The only real difference between a signature and a block hash is the speed. A signature takes milliseconds, and only delays a single transaction, while a block hash can take hours, and delays everything.

Usually, all a mediator must do is sign the first input they see that contains their pubkey. If they sign more than one input for the same output, their ticket is destroyed, because that can never happen by accident. If a mediator fails to sign their input within a reasonable time, a few seconds perhaps, the owner can simply replace the ticket for that input. If an attacker uses different tickets in each double-spend transaction, the pubkey with the lowest difficulty is discarded. This scenario might also happen unintentionally when a ticket is replaced. The more inputs a transaction has, the longer it takes to get confirmed, because it has to travel farther around the network. If multiple transactions contain multiple double-spends, each with different tickets, the last mediator should see all transactions before they have the opportunity to sign one, and they should sign only if the transaction that contains their pubkey also has the most combined difficulty. If a mediator sees a transaction that contains a weaker pubkey for the same output they have a stronger pubkey for, it serves as advance warning.
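
The combined-difficulty tie-break in that last rule could be sketched like this; the transaction structure is invented for illustration:
Code:
def leading_zero_bits(pub):
    n = int.from_bytes(pub, "big")
    return len(pub) * 8 - n.bit_length()

def pick_winner(conflicting_txs):
    # The last mediator signs only the transaction whose confirmation
    # pubkeys carry the most combined difficulty
    return max(conflicting_txs,
               key=lambda tx: sum(leading_zero_bits(p) for p in tx["pubkeys"]))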
Board: Development & Technical Discussion
Topic OP: Does the block hash function need to be cryptographically secure?
by SapphireSpire on 15/01/2024, 19:18:54 UTC
Cryptographically secure hash functions are irreversible, so their inputs can't be recovered by running the function in reverse. But none of the data in a block is secret, so the block hash is just a checksum and shouldn't need to be cryptographically secure. Am I wrong?
Board: Development & Technical Discussion
Re: Everybody's Doing POW the Wrong Way
by SapphireSpire on 02/01/2024, 22:46:15 UTC
Everyone measures efficiency based on their priorities. Miners and developers have opposite priorities. The miner's priority is to maximize the number of hashes per second per watt their machines can produce, while the developer's priority should be to minimize the number of hashes per second their function can produce on any machine.
Board: Development & Technical Discussion
Re: Everybody's Doing POW the Wrong Way
by SapphireSpire on 02/01/2024, 14:05:58 UTC
If you invent a hash function that is two times more computationally expensive, then you will simply drop the hash rate by half.
Speed is the goal of the miner, while lag is the goal of the developer. The idea is for the hash function to be a lot less computationally expensive: anything that keeps the processor paused or preoccupied with low-power, time-consuming overhead, like context switching, interrupt requests, memory accesses, or array searches. But the overhead must be required for valid hash outcomes, so it can't be skipped or optimized away.
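
A toy of the idea: a hash whose result depends on data-dependent memory reads, so the low-power overhead is required for a valid outcome and can't be skipped. This is my illustration, loosely in the spirit of memory-hard functions like scrypt, not a concrete proposal from the thread:
Code:
import hashlib

def slow_hash(data, mem_kib=1024, passes=4096):
    # Fill a buffer from the input, then force data-dependent reads through it;
    # skipping the buffer work changes the result, so it can't be optimized away
    words = mem_kib * 32                           # 32-byte words
    buf = []
    h = hashlib.sha256(data).digest()
    for _ in range(words):
        h = hashlib.sha256(h).digest()
        buf.append(h)
    for _ in range(passes):
        i = int.from_bytes(h[:4], "big") % words   # unpredictable index
        h = hashlib.sha256(h + buf[i]).digest()
    return h

print(slow_hash(b"block header").hex())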
Board: Development & Technical Discussion
Topic OP: Are we correctly measuring energy efficiency in proof-of-work?
by SapphireSpire on 31/12/2023, 19:42:16 UTC
⭐ Merited by garlonicon (1)
An original blockchain is necessarily the oldest. As long as a majority of work is dedicated to adding time to an original blockchain, it will always appear older than any competing counterfeit blockchain.

Work is energy expended over time, so proof of work is also proof of time. Blocks are produced with work to represent time, and are added to a blockchain to increase its apparent age. The work required to produce a block must scale with the network's work capacity to maintain the block interval, so it's not about how fast the work is done, but about the time it takes to do it.

Energy efficiency in proof-of-work is measured not by how many hashes can be done per second per watt, but by how few. The slower the hash function is, i.e. the more interrupts and memory accesses it involves that keep the processor suspended and maximize the delays between hashes, the more time and less energy it will consume. The idea is not to make the processor work harder between hashes, but to work less, if at all.
Board: Development & Technical Discussion
Re: Energy Consumption
by SapphireSpire on 24/12/2023, 23:08:32 UTC
I've done a lot to fix bugs and minimize complexity since I first posted this idea. Most of the comments are confusing because they're old and outdated.