More seriously:
Just because ChatGPT answers questions doesn't mean that it knows anything.
I'm not talking about "it might make a mistake"; I'm talking about something more fundamental: the way it constructs answers isn't based on knowing facts or on any logical framework.
The only thing ChatGPT cares about is that its answers "sound like" they are correct and appropriate in context.
ChatGPT is not an oracle, it's not even a search engine, it's not even a Wikipedia... it's worse than all of these. It's a verbiage engine.
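To make the "verbiage engine" point concrete, here's a toy sketch. This is my own illustration, not how ChatGPT actually works internally (real models are vastly larger neural networks), but the principle is the same: the model only tracks which words tend to follow which, and generates whatever continuation is statistically most plausible. The corpus and all names here are made up for the example.

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus -- the model's only "knowledge" is word
# co-occurrence statistics, not facts about the world.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is lovely ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    """At each step, emit the most frequent continuation -- text that
    'sounds right' given the statistics, with no notion of truth."""
    out = [start]
    for _ in range(length):
        candidates = following[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the"))
```

Note that the model emits a fluent, correct-sounding sentence only because "paris" happened to be the commonest word after "is" in its corpus. If "lovely" had appeared there more often, it would just as confidently tell you the capital of France is lovely: nothing in the mechanism distinguishes a fact from a frequent phrase.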