Still, the risk is significant, and we don't yet know its scope. Even if an AI is programmed to do something beneficial, it may develop a destructive method for achieving its goal. This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.