If there was confusion, wasn't this thing designed to say so and ask questions?
No, as far as I know it wasn't designed to act this way. In general, a machine learning model (which doesn't have _anything_ to do with AI, as in "intelligence") is just an extremely complicated math formula that "converts" input to output. A language model most likely applies some filters to make snippets of text fit together better (like diffusion models in image generation) and follow the grammar rules of the language. The model itself has no idea what the text is even about, let alone whether it should "doubt" its correctness.

It might be improved with a multi-pass setup that independently validates the output and asks for corrections, to mimic that behavior, but that would just make it a more complex math formula (which in turn might even decrease quality, and that is why such an obvious solution isn't used yet), not a critically thinking entity. There is a rough sketch of what I mean below.

But to my knowledge, under the hood it just googles some keywords [apparently not in real time; those texts are stored in a pregenerated database in a model-ready format], grabs text related to the keywords, and sticks it together in a human-readable way, there you go. "Pascal" gives search results for people with that name, the pressure unit, the programming language, a crater on the Moon, a French submarine, quotes from the 100% Pascal-sensei anime and many more. Not to mention that a search engine may also account for frequent typos and return results for things like the Paschal lamb. It grabs all those texts, then pre-filters, applies the neural model, post-filters, pre-filters, applies the neural model, post-filters... until the result fits a few formal criteria or other designed patterns (see the second sketch below). Done.
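Here is a purely illustrative toy sketch of that "multi-pass" idea: one pass produces a draft, a second pass validates it, and the loop repeats until the check passes. The function names (`generate`, `critique`) are made up for this example and don't correspond to any real model's API.

```python
# Toy sketch of a multi-pass "generate, then independently validate" loop.
# Both functions are hypothetical stand-ins, not a real API.

def generate(prompt: str) -> str:
    # Stand-in for the "complicated math formula": text in, text out.
    return "draft answer for: " + prompt

def critique(draft: str) -> bool:
    # Stand-in for an independent validation pass over the draft.
    return draft.strip() != ""

def answer(prompt: str, max_passes: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_passes):
        if critique(draft):
            break
        # Ask for a correction by feeding the rejected draft back in.
        draft = generate(prompt + " (previous draft rejected: " + draft + ")")
    return draft

print(answer("Who was Pascal?"))
```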
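And a similarly hand-wavy sketch of the retrieve / pre-filter / model / post-filter loop described above. The tiny corpus, the keyword lookup and the "fits formal criteria" check are all hypothetical placeholders meant to show the shape of the loop, not how any real system is actually wired.

```python
# Toy sketch of a "grab texts, filter, apply model, filter again" loop.
# Everything here is a placeholder for illustration only.

CORPUS = {
    "pascal": [
        "Blaise Pascal was a French mathematician.",
        "The pascal (Pa) is the SI unit of pressure.",
        "Pascal is a procedural programming language.",
    ],
}

def retrieve(keyword: str) -> list:
    return CORPUS.get(keyword.lower(), [])

def pre_filter(snippets: list) -> list:
    return [s for s in snippets if s]          # drop empty snippets

def apply_model(snippets: list) -> str:
    return " ".join(snippets)                  # stand-in for the neural step

def post_filter(text: str) -> str:
    return text.strip()

def fits_criteria(text: str) -> bool:
    return text.endswith(".")                  # some formal, designed pattern

def pipeline(keyword: str, max_rounds: int = 3) -> str:
    snippets = retrieve(keyword)
    text = ""
    for _ in range(max_rounds):
        text = post_filter(apply_model(pre_filter(snippets)))
        if fits_criteria(text):
            break
    return text

print(pipeline("Pascal"))
```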