That sounds interesting, except that when an AI is trained by a human using human logic, the AI learns much more and much faster, so all those teachings can be digested quickly. The AI also thinks much faster than a human, and once it has learned human logic it becomes better at applying it. I think learning something like this would take an AI only hours, or however long the human spends teaching it; it depends on the human, who should try to teach the AI everything they can, and of course in a very logical way. Only then will the AI be able to surpass human logic and abstract thinking, while accounting for the risks and errors that human emotions cause.
What seems quite interesting to me is the question of the relationship between the strict logic of an AI and any emotional reactions the AI itself might display. If the person teaching the AI is emotional, then emotional reactions and decisions may of course partly be reproduced in the decisions the AI produces on its own, without human involvement. However, I think its teachers will still try to minimize the emotional component in the AI's decisions. And then the AI will become completely uninteresting, even though all its decisions and recommendations will be strictly logical and correspond to the optimal solution given the data the AI has at that moment. But that dataset may be incomplete and may not even include a piece of information critical to an alternative solution. In such a situation, an emotional component in the final decision could actually help. But I'm not sure this is the direction of AI development supported by the majority of the scientific and technical community now working on improving AI.