A lot of people still don't fully understand what large language models like Meta AI can and can't do. These models are trained on huge collections of text and code, but they can still make mistakes or say things that aren't quite right, especially on complicated topics. As you saw, Meta AI probably just linked you to where it got its info, which shows its limits: it can't independently verify or analyze the information it retrieves. It's best suited for communication and summarizing, but no AI is anywhere near perfect when it comes to facts.