However, one plan is to use a Rust compiler that is not bootstrapped from a trustworthy source (it would be Canonical's Rust compiler). Call me nuts if you so choose, but that seems like a very cavalier decision to make with software that should put security first. You could say "trusting Canonical is subjective", in which case it should be ruled out altogether in a piece of software as critical as Bitcoin.
IIRC the plan is/was to bootstrap rustc ourselves via guix. Right now Bitcoin Core trusts Canonical for deterministic builds (Gitian uses Ubuntu), but the plan is to move to guix for fully deterministic builds on all platforms (guix builds all of the dependencies deterministically). However, since we are currently trusting Canonical anyway, I think it was decided that it is okay to use rustc from Canonical until we get guix working.
Sure, but this is only true for those who deploy the Gitian builds (although it would not come as a surprise to learn that the majority do).
And even if (hypothetically) no one were compiling the Bitcoin source code themselves, it still doesn't make sense to pile more trust onto Canonical. An operating system is a huge project with dozens of contributors, so there's a real possibility that the people handling autotools or gcc are trustworthy while the Rust person is crooked as hell (or incompetent, or inexperienced...).
So the proposition would be a real trade-off: hypothetical, unproven reliability gains versus investing more trust in a Linux distribution whose ethics are (arguably) already a little questionable. Specifically, Canonical made deals with commercial partners to bundle data-grabbing plugins with their Firefox package; I wouldn't be shocked to hear of further such bad faith.
My impression was that it would start as a failover, then run in parallel, and then possibly become the main implementation. So at some point, both the Rust and C++ implementations would be used to cross-check against each other. But then again, I haven't followed this conversation too closely.
A complete Rust implementation already exists, so that can be done now (presumably it is). I'm not sure that really makes the case for putting Rust into the C++ implementation; if anything, promoting the complete Rust implementation in some kind of tandem failover configuration with the C++ implementation makes a lot more sense to me.
So, why not alter the C++ and Rust implementations to allow them to share a block database? Either one could fall over, and we would hope that the other wouldn't fail in the same way (or for a different reason at the same time). Isn't that a more sensible way to approach this?
The main issue I have with Rust in Bitcoin Core is simply that there will be far fewer reviewers. I personally would have to learn Rust.
This is of course a chicken-and-egg problem, and time resolves it. But considering this is Bitcoin, moving as slowly as possible in that direction seems like the prudent option.
The other point of contention is whether Rust will actually reduce the number of major bugs in Core. C++ already gives us tools (RAII, smart pointers) that take some memory management off our hands, so it isn't as bad as C, where it is very easy to forget to free a pointer. But we still can and do get segfaults from null pointer dereferences, so Rust would certainly help there. Yet if you look at the other bugs that have been in Core, most of them have been logic errors. Rust would not help with those, and it could potentially make them worse, since fewer people know Rust.
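To make the null-dereference point concrete, here is a minimal Rust sketch (the function and data are invented for illustration, not taken from Core) of how `Option` forces the missing-value case to be handled at compile time, where a C++ lookup returning a raw pointer could be dereferenced unchecked:

```rust
// Hypothetical lookup table; names are made up for illustration.
// In C++, a failed lookup often yields a null pointer or end iterator
// that nothing forces you to check before dereferencing. In Rust, the
// result is an Option<u32>, and the value cannot be used until the
// None case has been handled explicitly.
fn find_block_height(blocks: &[(&str, u32)], hash: &str) -> Option<u32> {
    blocks
        .iter()
        .find(|(h, _)| *h == hash)
        .map(|(_, height)| *height)
}

fn main() {
    let blocks = [("abc123", 1u32), ("def456", 2)];

    // The compiler will not let us treat the Option as a plain u32;
    // the match below must name the None branch.
    match find_block_height(&blocks, "zzz999") {
        Some(h) => println!("height {}", h),
        None => println!("unknown block"), // this branch can't be "forgotten"
    }

    assert_eq!(find_block_height(&blocks, "def456"), Some(2));
    assert_eq!(find_block_height(&blocks, "zzz999"), None);
}
```

The design point is that `Option<u32>` is a different type from `u32`, so "forgot to check for null" becomes a compile error rather than a runtime segfault.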
At the end of the day, I'm personally +0 on Rust. I mostly don't care, but I would not be opposed to having Rust in Core. It would be nice to have better compile-time memory protection, but I don't think that's such a big issue that it really needs to be fixed.
It reminds me slightly of the managed-memory concept in general; the people who promoted that stuff now very quietly concede (or haughtily change the subject...) that it's not the magic it was sold as. The reality with Java and C# was that you did eventually need to understand the computer science behind memory allocation/deallocation, as the bytecode compiler would make mistakes, or the "garbage collection" module in the runtime would destroy variables before they had even been used, etc. And so the unhelpful response was "hey guys, but Java can still do pointers though!", which naturally gave the more sensible people cause to wonder why they were going through all the trouble of using Java to begin with.
I don't know the details of how Rust handles memory allocation, and clearly there are accomplished developers (who seemingly know Rust well) who find the overall proposition convincing. But I really wonder how much time this would actually save (or how many network-wide catastrophes it could avoid) versus how much time may be lost building up a dependency on Rust, only for everyone to change their mind in three years once the real-world practicalities become more apparent.
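For what it's worth, the short version of how Rust handles allocation can be sketched in a few lines (the helper function below is invented for illustration): there is no garbage collector; each value has a single owner, memory is freed deterministically when that owner goes out of scope, and the compiler rejects any use of a value after ownership has moved:

```rust
// A rough sketch of Rust's ownership model (no GC involved).
fn main() {
    let original = String::from("block data"); // `original` owns the heap buffer
    let moved = original;                      // ownership moves to `moved`

    // println!("{}", original); // compile error: use of moved value `original`
    assert_eq!(moved, "block data");

    {
        let scoped = String::from("temporary");
        assert_eq!(scoped.len(), 9);
    } // `scoped` goes out of scope here; its buffer is freed, deterministically

    // Borrows allow reading without taking ownership, so the caller
    // keeps the value afterwards.
    let len = byte_len(&moved);
    assert_eq!(len, 10);
}

// Hypothetical helper: takes a shared borrow rather than ownership.
fn byte_len(s: &str) -> usize {
    s.len()
}
```

This is what replaces both the garbage collector of Java/C# and the manual free() of C: the "use after free" and "destroyed before use" cases are rejected at compile time rather than handled (or mishandled) at runtime.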
Dangling pointers that cause segfaults are highly likely to manifest either frequently enough that they are quickly spotted, or infrequently enough that they can be safely ignored in the short term or go completely undiscovered. Is it really worth the huge effort of moving to a less well-known, less well-understood programming language just to solve that "problem"? In Bitcoin? :/