Checking the hardware is viable only with sophisticated lab equipment. To check the software, someone would have to carefully audit the source code (at every release) for malicious backdoors or weaknesses, and then the client would have to verify that the compiled firmware he is loading, duly signed by the manufacturer, actually matches that source code. Obviously neither is viable in practice, except after the fact.
The hardware can be checked by feeding it known inputs and verifying that the outputs match what is expected.
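For illustration only, here is a minimal sketch of that kind of known-answer test. It uses an HMAC as a toy stand-in for the device's deterministic signing (real wallets use ECDSA with RFC 6979 nonces, which is equally deterministic); `ToyDevice` and the test vectors are invented for this example and do not come from any actual wallet API:

```python
import hmac
import hashlib

# Toy stand-in for a hardware device: deterministically "signs" a message.
# A real device would run ECDSA with RFC 6979 nonces, which is also
# deterministic, so the same known-answer technique applies.
class ToyDevice:
    def __init__(self, secret: bytes):
        self._secret = secret

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._secret, message, hashlib.sha256).digest()

# Known-answer vectors, precomputed on a trusted reference device.
SECRET = b"reference-secret"
KNOWN_VECTORS = [
    (b"test vector 1", hmac.new(SECRET, b"test vector 1", hashlib.sha256).digest()),
    (b"test vector 2", hmac.new(SECRET, b"test vector 2", hashlib.sha256).digest()),
]

def check_device(device: ToyDevice) -> bool:
    # Feed each known input and compare against the expected output.
    return all(device.sign(msg) == expected for msg, expected in KNOWN_VECTORS)

print(check_device(ToyDevice(SECRET)))       # True: device behaves as expected
print(check_device(ToyDevice(b"tampered")))  # False: output deviates
```

The catch, of course, is that this only exercises the inputs you try; which is exactly the point made in the next reply.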
It is easier to find the private key of a bitcoin address by trial and error than to check all possible inputs of such a device. (Translation, just to avoid misunderstandings: it is totally infeasible.)
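A back-of-the-envelope comparison makes the scale concrete (the 100-byte message length is an arbitrary example, chosen only to show the gap):

```python
# Brute-forcing a bitcoin private key means searching a space of ~2**256 keys.
key_space_bits = 256

# Exhaustively testing a device means trying every possible input.
# Even restricted to 100-byte messages, that is 2**(8*100) = 2**800 inputs.
input_space_bits = 8 * 100

# The input space exceeds the key space by a factor of 2**544.
print(f"key space:   2**{key_space_bits}")
print(f"input space: 2**{input_space_bits} (100-byte inputs alone)")
print(f"ratio:       2**{input_space_bits - key_space_bits}")
```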
Their build process is deterministic, so you can in fact check that the signed binary matches the open source code. It is also not true that every individual has to check the code at every release; that can be done on an ongoing basis by a community of semi-trusted individuals.
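In outline, reproducible-build verification comes down to comparing digests. A minimal sketch, assuming the firmware has been built locally from the tagged source and that the vendor's signature has already been stripped or masked out of both images (the file names here are hypothetical):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: a firmware image built locally from the
# published source, and the vendor's released image with its signature
# removed so the two payloads are byte-for-byte comparable.
local = sha256_file("firmware-built-from-source.bin")
released = sha256_file("firmware-released-unsigned.bin")

# With a deterministic build, the two digests must be identical.
print("match" if local == released else "MISMATCH: binary does not match source")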
Each client will have to download and install a copy of the firmware at every update, so each client would have to check that his copy matches the copy that the community has verified by compiling the source code. That can be done by comparing only the hashes of the firmware; but how will the client get the correct hash to compare against, and how will he compute the hash of the downloaded copy, on an untrusted machine (which is the very assumption that justifies using a Trezor)?
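The mechanical part of that check is trivial; the sketch below assumes a hypothetical community-published digest and file name, and the comments mark exactly where the trust problem lives:

```python
import hashlib

# Hypothetical digest published by the community that reproduced the build.
# Trust problem #1: this value is fetched on the same untrusted machine,
# so a compromised browser or OS could show the client a forged digest.
COMMUNITY_DIGEST = "0f1e2d..."  # placeholder value for illustration

def digest_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Trust problem #2: hashlib itself, and this script, run on the untrusted
# machine; malware could simply report a match regardless of the file.
downloaded = digest_of("downloaded-firmware.bin")  # hypothetical file name
print("ok" if downloaded == COMMUNITY_DIGEST else "digest mismatch")
```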
You're really reaching, aren't you? What's your angle here exactly?
I am merely pointing out a fact that should be obvious to anyone who really tries to evaluate the security of the system.
Just because something is "bitcoin" does not mean it is perfect. While trusting a Trezor is certainly better than trusting a random PC or smartphone, clients still must trust the manufacturers: their honesty, and their zeal in keeping intruders out of the manufacturing and shipping process.