What does it mean for the verifier implementation to be "correct"?
I meant that there could be implementation bugs, as in any cryptographic software, and it will take a lot of work to harden it.
So I trade the ability to verify a completely valid blockchain myself for the assumption that your organization built a proper prover and verifier.
We have not really built a new verifier; we only apply existing open-source tools. We use the Giza verifier, which is mostly the Winterfell STARK library. What we have added is a translation of that verifier to Cairo.
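For concreteness, the host-side entry point of a Winterfell-style verifier is roughly the following. This is a hedged sketch against the ~v0.6 API (later releases changed the generics); `BitcoinAir` is a hypothetical placeholder for the AIR describing the computation, not an identifier from Giza or ZeroSync. The Cairo translation reimplements this same check inside a Cairo program so that the verification step can itself be proven.

```rust
// Sketch only, assuming winterfell ~0.6; `BitcoinAir` is a hypothetical
// AIR type standing in for whatever constraint system the proof targets.
use winterfell::{verify, Air, StarkProof, VerifierError};

fn verify_chain_proof(
    proof: StarkProof,
    pub_inputs: <BitcoinAir as Air>::PublicInputs,
) -> Result<(), VerifierError> {
    // The verifier checks the proof's commitments, constraint evaluations,
    // and FRI layers against the public inputs; it never re-executes the
    // underlying computation (here, chain validation).
    verify::<BitcoinAir>(proof, pub_inputs)
}
```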
Best of all, you don't have to use it. It is fully optional. It can be rolled out for low-value use cases first and grow over time into a hardened library that makes sense for high-value use cases.
I don't think the extent of my concern is getting across.
I can build a prover and verifier for generic data. For example:
I want to prove I know x where x * secp256k1.G = (xX, xY), and I give you (xX, xY) as the ZK proof.
I can only prove I know this by revealing x.
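Concretely, the setup being described is just scalar multiplication on secp256k1. Here is a minimal sketch using the Rust `secp256k1` crate; the fixed test scalar and the variable names are mine, for illustration only:

```rust
// Illustrative only: computes the point x*G that the example hands over.
use secp256k1::{PublicKey, Secp256k1, SecretKey};

fn main() {
    let secp = Secp256k1::new();

    // x: the secret scalar (a fixed 32-byte value here, for reproducibility).
    let x = SecretKey::from_slice(&[0x42u8; 32]).expect("valid scalar");

    // (xX, xY) = x * G, the public point given out as the "proof".
    let point = PublicKey::from_secret_key(&secp, &x);

    // The point commits to x without leaking it, but handing it over does
    // not by itself demonstrate knowledge of x: anyone can relay a point
    // they copied from elsewhere.
    println!("x*G = {}", point);
}
```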
Now suppose x encodes Bitcoin consensus data. The coordinates (xX, xY) may well look like any other coordinates in the proof, but if you then reveal that x is invalid consensus data, it is revealed that every block built on x is now invalid.

So if an attack were coordinated against nodes that do not check consensus data themselves, they would need to constantly reveal x, which breaks the ZK assumption and leaves only a fully homomorphic proof, which Bitcoin already has.

If all of this is understood, how is it in any way an optimization to use external provers on top of the underlying full-node proof system?