People who say that AI isn't dangerous simply aren't in the know. Scientists even convened earlier this year to discuss limiting their artificial-intelligence research in order to protect humanity.
The short answer is: it can be. The long answer is: hopefully not.
Artificial intelligence is coming, and we are the ones who will create it. We need to tread carefully in how we deal with it.
The right approach is to develop robots with singular purposes rather than fully autonomous robots that can do it all. Build one set of robots that chooses targets and another that does the shooting. Have one robot decide which person needs healing and another travel to that person and heal them.
Separate the functionality of robots so we don't have T-1000s roaming the streets; a rough sketch of what that separation might look like follows.
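Purely to make the idea concrete, here is a minimal Python sketch of that separation, in which one component can only select and another can only act, with an explicit approval step wedged between them. All of the names here (Target, Selector, Actor, supervised_step) are hypothetical, invented for this illustration.

```python
from dataclasses import dataclass

@dataclass
class Target:
    """A candidate identified by the selector (hypothetical fields)."""
    location: tuple[float, float]
    priority: int

class Selector:
    """Only identifies and ranks candidates; it has no actuators."""
    def choose(self, candidates: list[Target]) -> Target:
        return max(candidates, key=lambda t: t.priority)

class Actor:
    """Only carries out an action at a given location; it does no selection."""
    def act(self, location: tuple[float, float]) -> None:
        print(f"Acting at {location}")

def supervised_step(selector: Selector, actor: Actor,
                    candidates: list[Target], approved: bool) -> None:
    """Bridge the two components with an external approval gate,
    so neither robot completes the whole loop on its own."""
    target = selector.choose(candidates)
    if approved:  # e.g. a human sign-off between selecting and acting
        actor.act(target.location)
```

The point of the sketch is that the dangerous capability only exists in the composition, not in either component, and the composition runs through a gate that neither robot controls.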
That is Plan B, in my opinion. The best option is human cybernetics: our scientists and engineers should focus on enhancing human capabilities rather than outsourcing decision-making to artificial intelligence.
I think giving robots different roles is a good idea, though if they truly had AI, it isn't hard to imagine that they could learn to communicate with each other and plot something together. I don't think enhancing human capability should necessarily take priority over robots; I think both should be developed. You could develop technology that makes it easier for a human to work on an assembly line, and that would be a somewhat useful tool, but it would be much better to just build a robot to replace the human. Humans shouldn't have to do mundane tasks if they can create robots to do them instead.