So I've got a question. The telemetry HTTP interface lists the card type but the JSON-RPC does not. Also, the card order in the miner matches neither nvidia-smi nor PCI ordering. How does the miner order the cards?
Thanks for pointing this out.
ZM uses CUDA_DEVICE_ORDER. Nvidia specifies its default behaviour like this:
FASTEST_FIRST causes CUDA to guess which device is fastest using a simple heuristic, and make that device 0, leaving the order of the rest of the devices unspecified.
The default behaviour can be changed by setting the environment variable 'CUDA_DEVICE_ORDER' to 'PCI_BUS_ID', which causes CUDA to order the devices by '... PCI bus ID in ascending order'.
The documentation for this is located at:
http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
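
In case it helps for checking this on a rig, here's a minimal CUDA runtime sketch (just an illustration, not the miner's actual code; the file name is made up) that prints the devices in whatever order CUDA enumerates them, together with their PCI addresses, so the output can be compared against nvidia-smi with and without CUDA_DEVICE_ORDER=PCI_BUS_ID exported:

// order_check.cu -- illustration only; build with: nvcc -o order_check order_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // With the default FASTEST_FIRST ordering this index will generally not
        // match nvidia-smi; with CUDA_DEVICE_ORDER=PCI_BUS_ID exported before
        // launch, devices are enumerated by ascending PCI bus ID instead.
        // PCI address printed as domain:bus:device.function (function assumed 0).
        std::printf("%d  %s  %04x:%02x:%02x.0\n", dev, prop.name,
                    prop.pciDomainID, prop.pciBusID, prop.pciDeviceID);
    }
    return 0;
}

Running it once with the variable unset and once with it set to PCI_BUS_ID should show the two orderings side by side.
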
It'd be great if in the telemetry JSON-RPC you included the UUID for the card -- for larger deployments it'd let us identify specifically which card the miner is referring to. Or the PCI slot ID, or anything.

That's a good point, I'll include both: PCI-Bus-ID and UUID.
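
For what it's worth, both are available straight from the CUDA runtime; here's a rough sketch of how they could be queried per device (again just an illustration with a made-up file name; cudaDeviceGetUuid() needs CUDA 9.2 or newer, and the byte order of the printed UUID is worth double-checking against nvidia-smi):

// id_dump.cu -- sketch of reading the PCI address and UUID for each CUDA device.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        cudaUUID_t uuid;
        cudaDeviceGetUuid(&uuid, dev);

        // PCI address in domain:bus:device.function form (function assumed 0).
        std::printf("%d  %s  %04x:%02x:%02x.0  GPU-", dev, prop.name,
                    prop.pciDomainID, prop.pciBusID, prop.pciDeviceID);
        // Print the 16 raw UUID bytes in the 8-4-4-4-12 style nvidia-smi uses.
        for (int i = 0; i < 16; ++i) {
            if (i == 4 || i == 6 || i == 8 || i == 10) std::printf("-");
            std::printf("%02x", (unsigned char)uuid.bytes[i]);
        }
        std::printf("\n");
    }
    return 0;
}
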
If this information is already there, or there's a logical method by which the miner organizes the cards, please let me know. For example, in one of my rigs I have the following:
1060 PCI bus: 0:01:00.0
1060 PCI bus: 0:02:00.0
1070 PCI bus: 0:03:00.0
1070ti PCI bus: 0:05:00.0
1070ti PCI bus: 0:06:00.0
That's also how they are ordered in nvidia-smi and nvidia-settings (all Linux, obviously). However, the miner reports them as follows:
0 GeForce GTX 1070 Ti
1 GeForce GTX 1060
2 GeForce GTX 1060
3 GeForce GTX 1070
4 GeForce GTX 1070 Ti
For reference, the 1060 on PCI bus 01 is the "active" card driving Xorg.