Does the calculation of this transaction fee follow any consistent pattern or general rule? Could you perhaps illustrate it with a concrete example?
The fee you pay for a transaction equals the virtual size of the transaction (in vbytes) multiplied by the fee rate (in sat/vbyte).
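For example, if a transaction's virtual size is 140 vbytes and you pay a fee rate of 20 sat/vbyte, the fee is 140 × 20 = 2,800 sats.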
The virtual size of the transaction depends on the number of inputs and outputs: the more inputs and outputs you add, the larger the transaction and the higher the fee you have to pay for it.
The transaction virtual size also depends on the input/output types. For example, a legacy input adds more to the transaction virtual size than a segwit input.
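As a rough sketch, here is how you could estimate the fee yourself. The per-component sizes below are approximate, commonly quoted figures (the exact virtual size depends on the wallet and the script details), and the function name is just illustrative:

```python
import math

# Approximate virtual sizes in vbytes (assumed typical values)
OVERHEAD_VBYTES = 10.5          # version, locktime, counts, segwit marker/flag
INPUT_VBYTES = {
    "p2pkh": 148,               # legacy input
    "p2wpkh": 68,               # native segwit input (much smaller)
}
OUTPUT_VBYTES = {
    "p2pkh": 34,                # legacy output
    "p2wpkh": 31,               # native segwit output
}

def estimate_fee(input_types, output_types, fee_rate_sat_per_vb):
    """Estimate the fee in satoshis for a transaction with the given
    input/output types and a fee rate in sat/vbyte."""
    vsize = OVERHEAD_VBYTES
    vsize += sum(INPUT_VBYTES[t] for t in input_types)
    vsize += sum(OUTPUT_VBYTES[t] for t in output_types)
    return math.ceil(vsize * fee_rate_sat_per_vb)

# One segwit input, two segwit outputs, at 20 sat/vbyte:
print(estimate_fee(["p2wpkh"], ["p2wpkh", "p2wpkh"], 20))  # ~2810 sats
```

Swapping the segwit input for a legacy one in the same example adds roughly 80 vbytes, which is why spending legacy coins costs noticeably more at the same fee rate.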
You choose the fee rate yourself. Set it depending on how congested the network is and how fast you want your transaction to be confirmed: the faster you want it confirmed, the higher the fee rate you should use.