In my design, we abandon "reference repository", "fetch", "pull", etc., altogether. More precisely, we push them behind the scenes with an abstraction layer, keeping legacy Git intact.
So we don't really abandon them; we just stop depending on GitHub.com for their maintenance.
As I said, we push them behind the scenes. Consider how a file system hides the actual low-level block I/O: applications read and write files as a continuous stream of bytes, but behind the scenes this is done with discrete blocks that are not guaranteed to be physically adjacent.
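To make the layering concrete, here is a minimal sketch in Python (the names RepoView and synchronize, and the peer-URL parameter, are placeholders of mine, not part of any finished design): the application sees one high-level operation, and the legacy commands live behind a single private helper, the same way block I/O lives under a file system.

[code]
import subprocess

class RepoView:
    """Sketch of the abstraction layer: callers get one high-level
    'synchronize' operation; legacy Git plumbing stays intact below it."""

    def __init__(self, workdir: str, peer_url: str):
        self.workdir = workdir
        self.peer_url = peer_url  # a p2p peer instead of a central "origin"

    def _git(self, *args: str) -> str:
        # Every legacy command is confined to this single private helper.
        result = subprocess.run(["git", "-C", self.workdir, *args],
                                capture_output=True, text=True, check=True)
        return result.stdout

    def synchronize(self) -> None:
        # Behind the scenes: fetch from the peer and fast-forward.
        # The caller never issues "fetch" or "pull" itself.
        self._git("fetch", self.peer_url)
        self._git("merge", "--ff-only", "FETCH_HEAD")
[/code]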
Decentralizing git (isn't that already accomplished with Gitea servers?) doesn't break the core idea of distributing and tracking software versions, which includes fetching and pull requests.
Gitea has nothing to do with decentralization per se. It provides excellent, yet centralized, Git self-hosting: you still have the reference-repository concept and the related machinery, just like ordinary Git hosting.
One thing I don't understand is how you ensure the integrity of the software.
It is both easy and hard. Trivially, one can always use social knowledge (external data) to prune invalid/unwanted forks (@cricktor has already mentioned this above); unfortunately, that is not applicable to the automated synchronization process I'm suggesting.
For the latter purpose, my scheme imposes well-defined authorization metadata: PRs that try to change it are treated as forks, exactly like unauthorized PRs. For legitimate authorization updates, the metadata is organized hierarchically, so a repository owner (with a unique signature) can grant or revoke commit access for other contributors, and so on.
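To illustrate the classification rule, here is a hypothetical sketch in Python (classify_pr, Verdict, and the field names are placeholders of mine; actual signature verification, e.g. Ed25519 over the PR contents, is abstracted into a boolean input):

[code]
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    ACCEPT = "accept"   # authorized change, safe to synchronize automatically
    FORK = "fork"       # kept, but treated as an independent fork

@dataclass
class AuthMetadata:
    owner_key: str                                # repository owner's unique key
    committers: set = field(default_factory=set)  # keys granted by the owner

def grant_access(meta: AuthMetadata, granter_key: str, new_key: str) -> None:
    # Legitimate authorization update: only the owner may grant access.
    if granter_key != meta.owner_key:
        raise PermissionError("only the repository owner may grant access")
    meta.committers.add(new_key)

def classify_pr(meta: AuthMetadata, signer_key: str,
                touches_auth_metadata: bool, signature_valid: bool) -> Verdict:
    # Unauthorized PRs and PRs that tamper with the authorization
    # metadata itself are both demoted to forks.
    if not signature_valid:
        return Verdict.FORK
    if touches_auth_metadata and signer_key != meta.owner_key:
        return Verdict.FORK
    if signer_key == meta.owner_key or signer_key in meta.committers:
        return Verdict.ACCEPT
    return Verdict.FORK
[/code]

In a real implementation signature_valid would come from verifying the cryptographic signature against signer_key; the sketch only shows the decision rule the metadata enforces.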
On a p2p network, an attacker can replace the devs' keys with theirs.