When a percentage of user data is used in the development of an AI, important and sensitive information can be exposed and potentially misused in the process. How does COVALENT plan to secure user data from being misused?
First of all, it's going to be the user who gives permission for their data to be used for advancing AI knowledge, and secondly, I don't think the team will make use of the sensitive parts of the data for experimentation.