the algorithms produce their own algorithms, which then produce the results I'm trying to make them achieve. But at the end of this training, I, "the human", do not understand how the AI decided to produce these final algorithms.
Just a quick comment on this because I don't want to derail the thread. The evolution of AlphaGo, I think, demonstrates how quickly things are moving. A few years ago everyone was saying a machine could never beat the world's best Go players. Then (2015?) AlphaGo was developed and initially trained by humans through the input of a vast number of previous games. Learning algorithms subsequently built on this, but there was a big human input, guided by Go experts, and a lot of reliance on just brute-forcing the calculations. In 2016 it quite comprehensively beat Lee Sedol, one of the world's best human players.
The 2017 follow-up was AlphaGo Zero. This time they fed it only the rules and nothing else, and got it to teach itself. Within a short time they put it up against the original AlphaGo, and the entirely self-taught version won 100-0. It is indeed reaching the stage where computers aren't just better than humans at calculating; they're also better at learning how to calculate, and at learning how to learn. There is some exciting (scary?) emergent behaviour coming out of this.
It is this very interesting emergent behaviour that I find curious. One can write code that enables these artificial networks to learn on their own, though it is very tricky to do. A slight change in the parameters of the network's environment can produce large instabilities in the network's internal dynamics, and the output becomes garbage. These systems are difficult to stabilize, but once the right parameters are found, the networks can produce solutions on their own.
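To illustrate that sensitivity to parameters, here is a minimal, hypothetical sketch (not from any system mentioned in the thread): plain gradient descent on f(x) = x², where nudging a single hyperparameter, the learning rate, flips the run from smooth convergence to explosive divergence.

```python
def gradient_descent(lr, steps=50, x0=10.0):
    """Minimize f(x) = x^2 by repeating x <- x - lr * f'(x), with f'(x) = 2x."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

# Each step multiplies x by (1 - 2*lr), so the run is stable only if |1 - 2*lr| < 1.
stable = gradient_descent(lr=0.4)    # factor 0.2 -> x shrinks toward 0
unstable = gradient_descent(lr=1.5)  # factor -2  -> |x| doubles every step

print(abs(stable))    # a value vanishingly close to 0
print(abs(unstable))  # an enormous value: the "garbage" regime
```

Real network training involves far more moving parts, but the same mechanism is at work: an update rule whose effective multiplier drifts past 1 in magnitude blows the state up instead of settling it down.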
And so, some seem so worried about fast, powerful quantum computers, but maybe some should actually be worried about an AI building its own algorithm to find a private key, leaving us humans unable to understand how it did it.
hehe, of course, I know full well that AIs are still too primitive for any such silly notions. And I decline to comment on those who mention "the singularity", since it's just nonsensical fantasy.