So, does this mean that we have a situation where ChatGPT was caught both plagiarizing and lying about its claim?

I don't think it's lying so much as it just happens to be wrong a good portion of the time. For example, I caught this the other day -- I'm personally certain this text was AI-generated, although I cannot "prove" that it was:
Many cryptocurrencies, such as Bitcoin, are based on decentralized blockchain technology, which means that transactions can still be processed and recorded even if the internet is not available. While it's true that accessing cryptocurrency wallets and conducting transactions would be more difficult without internet access, it's possible that alternative methods of transmitting data could be developed in times of war.

I know that is just anecdotal, but Stack Overflow has actually banned the use of ChatGPT on its site because it was generating too many wrong answers:
Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.
The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.
I think the same rationale applies to this forum, and if Stack Overflow is doing it, we should also pursue a set of criteria to establish what constitutes an "AI-generated post."