I understand everything, but software is software and real tests are real tests.
As I said, the hashrate is 1-3 GH/s, maybe a little more, and the power usage is somewhere near 60W, but I would like to put exact numbers on it.
With the software I don't think I could get exact numbers.
Actually it's the software which will tell you your hashing performance, not the hardware.
Using your current hardware to check how fast it will run is risky, as you don't know the process parameters of the particular part you have in the lab. It might be way faster than its labeled speed grade. Your next production batch might not be.
Run the RTL code through Quartus. Then run static timing analysis in Quartus and it will tell you the maximum clock frequency the slowest part will operate at. Multiply this by the number of hash operations the architecture can perform on each clock and you have your hash rate.
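The arithmetic itself is trivial; here is a minimal sketch, where the Fmax and the hashes-per-clock figure are placeholder values you would replace with your own timing report and architecture:

```python
# Back-of-envelope hash-rate estimate from the Fmax reported by static timing
# analysis. Both numbers below are made up for illustration; substitute the
# worst-case Fmax from your own Quartus timing report and your design's
# actual throughput per clock cycle.

fmax_mhz = 80.0          # worst-case Fmax from the slow timing model (assumed)
hashes_per_clock = 2     # e.g. two fully pipelined hash cores (assumed)

hash_rate_mhs = fmax_mhz * hashes_per_clock   # MHash/s, since Fmax is in MHz
print(f"Estimated hash rate: {hash_rate_mhs:.1f} MH/s "
      f"({hash_rate_mhs / 1000:.2f} GH/s)")
```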
Or you can build a design which dynamically increases the clock speed until it fails to calculate nonces and then steps the clock back down until it produces correct results again. This is more complex, as you have to make sure that timing errors don't propagate into your safe clock domain, and also that your thermal solution can handle this behavior.
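To make the stepping policy concrete, here is a rough host-side sketch. The functions `set_clock_mhz()` and `run_known_nonce_test()` are hypothetical stand-ins for the real PLL/FPGA interface (in this sketch they just simulate a part that fails above an arbitrary frequency), and the step sizes and margin are assumptions, not recommendations:

```python
# Sketch of the "push the clock until it breaks" control loop. The interface
# functions are hypothetical; here they simulate a part whose logic stops
# producing correct nonces above a made-up frequency limit.

STEP_MHZ = 5.0
MARGIN_MHZ = 10.0                  # safety margin below the last passing point (assumed)
_SIMULATED_SILICON_LIMIT = 212.0   # purely invented, for the simulation only
_current_mhz = 0.0

def set_clock_mhz(freq):
    """Hypothetical: reprogram the PLL to the requested frequency."""
    global _current_mhz
    _current_mhz = freq

def run_known_nonce_test():
    """Hypothetical: hash a header with a known-good nonce and compare results."""
    return _current_mhz <= _SIMULATED_SILICON_LIMIT

def find_safe_clock(start_mhz=50.0, max_mhz=300.0):
    freq = start_mhz
    # Ramp up until the known-answer test fails or we hit the ceiling.
    while freq <= max_mhz:
        set_clock_mhz(freq)
        if not run_known_nonce_test():
            break
        freq += STEP_MHZ
    # Step back down until correct results come back, then keep a margin.
    while freq > start_mhz:
        freq -= STEP_MHZ
        set_clock_mhz(freq)
        if run_known_nonce_test():
            return max(start_mhz, freq - MARGIN_MHZ)
    return start_mhz

print(f"Settled at {find_safe_clock():.0f} MHz")
```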
For power, you should run a simulation of the full implementation of the design and generate a VCD (Value Change Dump) file, which records the toggle rate of every net in the design. This can be fed into a power analysis tool like PowerPlay, which will give you a good estimate of the FPGA's power consumption.
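What the tool does with those toggle rates is essentially the standard CMOS dynamic power equation, P ≈ ½·α·C·V²·f summed over the nets. A toy sketch of that arithmetic follows; every capacitance and toggle figure in it is invented, which is exactly why the vendor tool (which knows the real per-net capacitances from the fitter) beats any hand estimate:

```python
# Toy illustration of how toggle data turns into a dynamic power number.
# Each transition dissipates roughly 0.5 * C * V^2, so power per net is
# 0.5 * toggles_per_clock * C * V^2 * f. All values below are placeholders.

V_CORE = 1.2        # core voltage in volts (assumed)
F_CLK = 100e6       # clock frequency in Hz (assumed)

# (net name, effective capacitance in farads, transitions per clock cycle)
nets = [
    ("hash_core/w_expander", 15e-12, 0.40),
    ("hash_core/round_regs", 30e-12, 0.25),
    ("nonce_counter",         5e-12, 0.05),
]

dynamic_w = sum(0.5 * togg * c * V_CORE**2 * F_CLK for _, c, togg in nets)
print(f"Dynamic power from these nets: {dynamic_w * 1e3:.2f} mW")
```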
60W sounds like a lot for a single FPGA.