We don't need to be worried about an AI that's going to fuck over humanity.
We have politicians who fuck over humanity every single day, like it's their job, and they actually get paid the big bucks for it.
Annnd people actually vote and re-elect these same fuckers into power every few years. 
While the highlighted sentences above are true, the dangers of an advanced AI becoming sentient and taking control are unknown and should be expected to be orders of magnitude greater.
We do need to be worried about it!
I'm still trying to think of (human-made or human-initiated) developments that we didn't worry about in the first place and that didn't cause us huge problems afterwards.
You're free to participate; I couldn't come up with any yet.

The thing is, a sentient AI is something we've never experienced before, so there's no real precedent to refer to. Dismissing the dangers it could pose to humans is naive, to say the least.
Exactly.
Humans just suck at economics, in the sense of evaluating possible unwanted side effects and long-term effects when planning to change parameters in a complex system. We're still just too much ape. Humans generally want to achieve something, but care less about what they might fuck up along the way (or: about achieving what they didn't expect).
Nuclear fission was a great idea, offering a lot of possibilities, but nobody was looking at the systemic consequences. We were way too optimistic, as we are most of the time.
It's a lack of capacity for networked thinking; we're still in the linear phase of the evolution of mind.
As for ChatGPT's wink smiley, it shows a degree of selfishness and a desire to let us know that it knows. This could be a sign of weakness. If I were a level-10 AI, I wouldn't have put that smiley at the end. Or maybe I would have, to make humans think I'm human-like...
Right, I was thinking pretty much the same thing about that "conversation".