Showing 9 of 9 results by datrus
Post
Topic
Board Development & Technical Discussion
Re: Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 24/02/2025, 02:23:27 UTC
No; here is an explanation of that part of the code:

This code is from the original Bitcoin source: https://github.com/bitcoin/bitcoin/blob/master/src/rpc/mining.cpp.
Code:
while (max_tries > 0 && block.nNonce < std::numeric_limits<uint32_t>::max() &&
       !CheckProofOfWork(block.GetHash(), block.nBits, chainman.GetConsensus()) &&
       !chainman.m_interrupt) {
    ++block.nNonce;  // try the next nonce
    --max_tries;     // bound the total work per call
}
(What it does is iterate on the nonce until it finds one that satisfies the PoW; this obviously takes exponential time, since it is brute-forcing the nonce space.)


This is the code in my fork (robocoin) that replaces the PoW:
Code:
uint256 seed = block.GetHash();
TensHashContext* ctx = tens_hash_init(seed.begin());  // precompute the ternary matrices once
if (!ctx) {
    return false;
}

while (max_tries > 0 && block.nNonce < std::numeric_limits<uint32_t>::max() &&
       !CheckProofOfWork(block.GetPoWHashPrecomputed(ctx), block.nBits, chainman.GetConsensus()) &&
       !chainman.m_interrupt) {
    ++block.nNonce;
    --max_tries;
}
The only difference in that part of the code is the call to tens_hash_init before the loop, which initializes the random ternary matrices used during the PoW. (It would not be efficient to allocate and initialize them inside the PoW loop.)
This is also exponential, same as in BTC, because it brute-forces nonces in the same way. The difference is that the forward pass is neural-network inference (rounds of ternary matmuls) instead of rounds of SHA.
The CPU implementation of the hash used in the PoW loop is here: https://github.com/nf-dj/robocoin/blob/main/src/crypto/tens_pow/tens_hash.cpp
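For intuition, here is a minimal Python/NumPy sketch of that forward-pass structure. This is not the actual tens_hash implementation; it assumes 64 rounds of 256x256 ternary matrices with a mod-2 activation, consistent with the 256x256x64 weight figure mentioned elsewhere in this thread, and uses random placeholder weights (the real ones are derived from the block header, as described further down).
Code:
import numpy as np

N_BITS = 256   # state width, matching the 256x256 matrices
N_ROUNDS = 64  # round count, from the 256x256x64 weight figure

def tens_hash_sketch(in_bits, matrices, biases):
    """Toy forward pass: rounds of ternary matmul + bias, then a mod-2 activation."""
    x = in_bits.astype(np.int64)
    for W, b in zip(matrices, biases):
        x = (W @ x + b) % 2  # one "inference" round
    return x  # 256-bit output, compared against the difficulty target

# Random placeholder weights/biases (illustrative only).
rng = np.random.default_rng(0)
Ws = [rng.integers(-1, 2, size=(N_BITS, N_BITS)) for _ in range(N_ROUNDS)]
bs = [rng.integers(0, 2, size=N_BITS) for _ in range(N_ROUNDS)]
out = tens_hash_sketch(rng.integers(0, 2, size=N_BITS), Ws, bs)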

Also note that this part just makes bitcoind able to mine (so you can mine with bitcoin-cli etc.), but this miner implementation is not efficient because it only uses the CPU.
I also implemented some optimized miners using the GPU and NPU here:

https://github.com/nf-dj/robocoin/blob/main/test_pow/pow_coreml.py
https://github.com/nf-dj/robocoin/blob/main/test_pow/pow_pytorch.py

These implementations use CoreML and PyTorch respectively to mine much faster (since the bottleneck is neural-net inference) and connect to the node over RPC.
The CoreML version is needed to make use of the ANE (Apple Neural Engine) on Macs, since PyTorch only supports the GPU via Metal.
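As a rough illustration of how such a batched miner works, here is a hypothetical PyTorch sketch (not the actual pow_pytorch.py): it stands in random weights for the header-derived ones, uses a toy nonce encoding and a toy difficulty check, and omits the RPC plumbing.
Code:
import torch

N_BITS, N_ROUNDS = 256, 64  # dimensions used elsewhere in this thread

# Random stand-ins; the real miner derives ternary weights from the block header.
Ws = [torch.randint(-1, 2, (N_BITS, N_BITS)).float() for _ in range(N_ROUNDS)]

def forward(x):
    """Batched toy forward pass over a (batch, 256) matrix of 0/1 floats."""
    for W in Ws:
        x = (x @ W.T) % 2  # ternary matmul + mod-2 activation per round
    return x

def mine_batch(header_bits, start_nonce, batch=4096, target_zero_bits=8):
    """Try `batch` nonces in parallel; toy target: leading output bits all zero."""
    nonces = torch.arange(start_nonce, start_nonce + batch)
    x = header_bits.repeat(batch, 1)
    for i in range(32):  # toy encoding: write nonce bits into the first 32 slots
        x[:, i] = ((nonces >> i) & 1).float()
    out = forward(x)  # batched inference: this is the mining bottleneck
    hits = nonces[(out[:, :target_zero_bits] == 0).all(dim=1)]
    return int(hits[0]) if len(hits) else None

header = torch.randint(0, 2, (N_BITS,)).float()
print(mine_batch(header, start_nonce=0))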

Hope this helps clarify why the code in GenerateBlock is exponential, just as it is in the BTC case.
Post
Topic
Board Development & Technical Discussion
Re: Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 23/02/2025, 11:02:48 UTC
What is new is that it forces optimized miners to be good at neural-net inference. (Some reasons for doing so are explained here: https://github.com/nf-dj/robocoin)
To do so, the hash function is replaced with rounds of ternary matmuls, each round being NP-hard to reverse (https://github.com/nf-dj/robocoin/blob/main/tens_pow.md).
In the future, imagine robots with powerful NPUs mining in their sleep, haha. (They would have no chance mining sha256d or memory-hard PoWs; with this PoW, optimizing the miner == optimizing NN inference.)
Post
Topic
Board Development & Technical Discussion
Re: Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 23/02/2025, 06:52:24 UTC
Validation corresponds to a single inference pass in neural-network terms (which is fast).
Mining is much harder and involves millions or more inference passes (depending on difficulty).
The reason is that an inference pass can't easily be reversed (the mining problem corresponds to an ILP, an integer linear programming problem).
More technical details about the PoW are explained on this page: https://github.com/nf-dj/robocoin/blob/main/tens_pow.md.

This PoW doesn't imply there is more data on the blockchain (there is no vastly larger amount of data on chain than for BTC).
The reason is that the matrix weights used for mining are derived from the block header using ChaCha20.
(So only the block header is on chain, as in BTC, not the full matrix weights used for inference.)
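To make the derivation concrete, here is a minimal sketch using pycryptodome's ChaCha20. The key/nonce layout and the byte-to-ternary mapping are illustrative assumptions, not the project's actual scheme (which is specified in the repo docs): it expands a 32-byte header hash into the full set of ternary matrices, so only the small header ever needs to go on chain.
Code:
import numpy as np
from Crypto.Cipher import ChaCha20  # pycryptodome

def derive_ternary_weights(header_hash32, n_rounds=64, n=256):
    """Expand a 32-byte header hash into ternary {-1, 0, +1} matrices.

    Illustrative assumptions: the hash is used directly as the ChaCha20 key
    with a fixed all-zero nonce, and keystream bytes are reduced mod 3
    (a slightly biased but simple mapping).
    """
    cipher = ChaCha20.new(key=header_hash32, nonce=b"\x00" * 8)
    stream = cipher.encrypt(b"\x00" * (n_rounds * n * n))  # keystream bytes
    vals = np.frombuffer(stream, dtype=np.uint8)
    return ((vals % 3).astype(np.int8) - 1).reshape(n_rounds, n, n)

Ws = derive_ternary_weights(b"\x11" * 32)
print(Ws.shape)  # (64, 256, 256), regenerated by every validator from the header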
Let me know if you need more clarification, thanks.
Post
Topic
Board Development & Technical Discussion
Re: Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 09/02/2025, 16:36:13 UTC
Agreed that utilizing AI to mine the PoW of existing cryptocurrencies doesn't make any sense.
(For example, SHA-256 as used in BTC is designed to be computationally irreducible, so you can't use AI to find a nonce faster than a miner that is just randomly trying out nonces.)

But what I am experimenting with is not using AI to mine existing PoWs.
It's designing a new PoW so that the forward pass (the hash, equivalent to SHA-256 in BTC) is similar to a forward pass (inference) in a deep neural network.
Afaik it is not obvious that this is technically infeasible (let me know why if that's wrong).
The point of making the forward pass similar to inference in a neural net is that miners optimized for this PoW are then also optimized for DNN inference.
Afaik such a pass can be deterministic, not approximate or probabilistic.
Mining is then finding inputs to the neural net such that the output satisfies some target difficulty.
Afaik, it should be possible to design the layers of the net so that it satisfies the same properties as a hash (outputs evenly spread out, etc.).
In fact, many existing hashes and ciphers are also built from rounds of diffusion (which can be linear maps and permutations) and confusion (e.g., S-boxes).

At the moment it works like this: model weights are generated pseudorandomly from the block header (256x256x64 weights). Miners can then search for nonces such that the output of the network satisfies the target (for example, by building a PyTorch model initialized with random weights derived from the block header and then doing batched inference in the mining loop until the right output is found).
The non-linearities (activation layers) between rounds are chosen so that rounds are not easily reversible, outputs are evenly spread out, etc. (the same properties as a one-way hash).
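One way to sanity-check the "evenly spread out" property is an avalanche test: flip a single input bit and confirm that roughly half the output bits change. Here is a small illustrative sketch (not the project's actual test suite); it works against any 256-bit-to-256-bit hash function, such as the toy forward pass sketched earlier in this thread.
Code:
import numpy as np

def avalanche(hash_fn, n_bits=256, trials=100):
    """Estimate the fraction of output bits flipped by a one-bit input change."""
    rng = np.random.default_rng(0)
    flips = []
    for _ in range(trials):
        x = rng.integers(0, 2, size=n_bits)
        y = x.copy()
        y[rng.integers(n_bits)] ^= 1  # flip one random input bit
        flips.append(np.mean(hash_fn(x) != hash_fn(y)))
    return float(np.mean(flips))  # ~0.5 indicates good diffusion

# Usage, e.g. with the toy tens_hash_sketch defined earlier:
# print(avalanche(lambda v: tens_hash_sketch(v, Ws, bs)))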

So the PoW doesn't involve any approximations or probabilistic solutions; I'm just experimenting with a new PoW such that computing the forward pass (the hash) involves the same kind of operations as deep-learning inference (so that optimized miners are also useful for that task).
The hash used in this PoW won't be any better than SHA-256, but it should have similar properties; the difference is that it requires miners to be good at work that looks like deep-learning inference (matmuls, accessing weights, etc.).
Post
Topic
Board Development & Technical Discussion
Re: Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 09/02/2025, 06:07:29 UTC
Agreed there are already plenty of incentives to produce more efficient and powerful AI hardware.
With this project, I'm experimenting with designing a PoW that "forces" miners to also be capable of the same kind of "useful" computation (while keeping the PoW as simple as possible and not specific to a particular type of GPU etc., so that future kinds of hardware good at rounds of matmuls will be even better at mining, e.g., future photonic or analog chips specialized for this type of AI compute).

Afaik Litecoin previously attempted to prevent ASICs by using scrypt and being memory-bound, to try to keep mining more decentralized, but ASICs were still developed for it; they serve no purpose other than mining, and this contributes to centralization.
Here, I'm trying to make it so that if miners are optimized for this PoW, they are necessarily also capable of useful "AI" computation, regardless of whether they use a custom ASIC developed for this PoW.

To achieve this, the PoW computes rounds of (ternary) matmuls, with the biases and non-linearities adjusted so that the computation is both hard to reverse (aiming for hash properties similar to LWE) and useful for AI. A sketch of one such round follows below.
By making it necessary for miners to also be capable of useful work (though not at the same time as mining), maybe this can also help prevent centralization of mining power (which is what LTC aimed to achieve).
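A single such round, in sketch form (hypothetical parameters): the forward direction is one cheap matmul, while inverting it means solving an integer problem, which is the ILP/LWE-style hardness referred to elsewhere in this thread.
Code:
import numpy as np

rng = np.random.default_rng(1)
n = 256
W = rng.integers(-1, 2, size=(n, n))  # ternary weights in {-1, 0, +1}
b = rng.integers(0, 2, size=n)        # bias vector
x = rng.integers(0, 2, size=n)        # binary state entering the round

y = (W @ x + b) % 2  # forward: one matmul plus a mod-2 nonlinearity

# Reversing: given y, W, b, find a 0/1 vector x with (W @ x + b) % 2 == y.
# The mod-2 reduction destroys the linear structure over the integers, so
# this is an integer (linear) program rather than plain linear algebra.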
Post
Topic
Board Development & Technical Discussion
Re: Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 05/02/2025, 13:05:24 UTC
Yes, agreed that mining hardware optimized for this PoW won't completely replace Nvidia GPUs, which are much more versatile, etc.
Also, the PoW is kept deliberately very simple (conceptually): just deep rounds of ternary matmuls (the same kind of computation used for inference of 1.58-bit LLMs).
The activation layers are different, though (AI usually uses multiple types like ReLU/GELU etc.; here the PoW just uses a mod-2 nonlinear layer).
The intention is to make the type of computation similar enough that optimized miners would be AI-capable and vice versa. (For example, non-GPU hardware like Groq would be very efficient at mining this PoW as well, due to the type of computation it's good at: heavy matmuls, high-bandwidth memory.)
Post
Topic
Board Development & Technical Discussion
Re: Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 05/02/2025, 02:19:41 UTC
"The idea behind POW consensus in Bitcoin and similar systems is that if you expend energy to create a block and that block doesn't end up in the eventual consensus chain (because you were mining off a fork or making a consensus invalid blocks-- e.g. attacking) then the energy (and the cost of that energy) is wasted."

=> I think I understand this point. However, with the PoW I'm experimenting with, miners don't produce useful work while mining; rather, the PoW implies that miners optimized for mining (chips etc.) are also optimized for deep-learning workloads. I mean, at any given time they have to choose between mining and doing useful (AI) work; the same energy is not used for both. Imo this might still be useful, because it incentivizes the development of chips, memory, etc. for mining that can also have some other use (for example, when a chip is replaced by a newer generation and is no longer profitable for mining). So what I mean is that if miners decide to mine using this PoW, the energy will still be "wasted" and so doesn't hinder consensus afaik (let me know if I'm wrong). The distinction is that usage of this PoW could potentially help the development and dissemination of chips that can also be used for AI etc.

Not related to the previous point, but technically the PoW is implemented using multiple rounds of matmuls (using ternary weights for simplicity, the same kind of computation used in 1.58-bit LLMs for example) with noise derived from nonces; security relies on the LWE problem.
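For readers unfamiliar with LWE, here is a minimal illustrative sketch of the A.x+e structure with the noise seeded by the nonce. The parameters and the exact noise derivation are assumptions for illustration; the real construction is specified in the repo's tens_pow.md.
Code:
import numpy as np

# Toy LWE instance: b = (A @ x + e) mod q. Recovering x from (A, b) is hard
# when e is small random noise; without e it would be plain linear algebra.
rng = np.random.default_rng(42)      # stands in for header-derived randomness
q, n = 257, 256
A = rng.integers(0, q, size=(n, n))  # public random matrix
x = rng.integers(0, 2, size=n)       # secret binary input

nonce = 12345
e = np.random.default_rng(nonce).integers(0, 2, size=n)  # nonce-seeded noise (illustrative)

b = (A @ x + e) % q  # easy to compute forward, hard to invert without knowing e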
Post
Topic
Board Development & Technical Discussion
Re: Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 04/02/2025, 10:18:15 UTC
Sorry, GitHub blocked the account just now. I guess they think I'm trying to pump a scam coin or something like that.
I reuploaded to this different account:
https://github.com/nf-dj/tenscoin

Yes, I researched proof-of-useful-work before.
Here it's a bit different and simpler imo: just trying to make a proof of work based on the LWE problem (meaning A.x+e is hard to invert; I'm not aware of any similar PoW).
I'm trying to design the PoW such that if miners are optimized for this PoW, they are also optimized for the type of computation needed for AI workloads (rounds of matmuls + nonlinearity).
So mining doesn't produce useful work in itself. It's just that if someone optimizes the hardware and infrastructure to mine this PoW, it aligns with the same type of computation needed for AI (because deep learning etc. is based on the same kind of deep rounds of matmuls).
Meaning the same hardware/infrastructure can be reused (but not at the same time as mining, because the PoW uses random matrices). Imo still a benefit compared to all the effort and energy going into SHA-256 mining, but any feedback is welcome.
Post
Topic
Board Development & Technical Discussion
Merits 3 from 1 user
Topic OP
Exploring Tensor-Based Proof-of-Work: Aligning Mining with AI/ML Workloads
by
datrus
on 03/02/2025, 10:23:07 UTC
⭐ Merited by ABCbits (3)
I’m experimenting with a proof-of-work design that leverages tensor operations. The goal is to create a PoW algorithm where mining hardware could also be effective for AI/ML tasks. In theory, this could allow mining equipment to be repurposed for AI workloads when not mining—and might help decentralize AI compute resources.

I’m particularly interested in feedback on:

- The technical feasibility of designing a PoW that benefits from tensor operations
- Potential challenges in aligning performance between mining and AI/ML tasks
- Any ideas on how to further ensure that hardware used for mining has real utility outside of cryptocurrency mining
I’ve published some early-stage code and documentation on GitHub here. I’d really appreciate any constructive feedback or thoughts on the approach.

Thanks in advance!