You're absolutely right. It's important to understand what LLMs are good at and how they work.
Basically, an LLM is a statistical text prediction engine: given this text, what text is likely to come next? So it's no wonder that it fails miserably at multiplication.
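To make "statistical text prediction" concrete, here is a toy bigram model over a made-up corpus. It is a deliberately crude sketch, not how a real LLM is built, but the core idea is the same: count what tends to follow what, then emit the most likely continuation.

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative only) -- an LLM is this idea scaled up massively,
# with learned representations instead of raw counts.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which token follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice after "the", vs once each for "mat"/"fish")
```

A model like this only knows which tokens co-occur; there is no arithmetic circuit anywhere, which is why "what comes after `231 * 47 =`" is a prediction problem, not a calculation.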
However, LLMs are really good at summarizing text, and that is basically what the AI Chat plugin does: it summarizes the given sources to answer a given question. In addition, a by-product of LLMs is used: vector-based similarity search. This mechanism is very well suited to comparing a given question with a bunch of documents and finding the relevant ones (I explained that in more detail in the post you linked above).
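A minimal sketch of what vector-based similarity search does: turn the question and each document into a vector, then rank documents by cosine similarity. Real systems use learned embedding models; the bag-of-words vectors and document names below are just stand-ins for illustration.

```python
import math
from collections import Counter

# Hypothetical mini document store (illustrative names and content).
docs = {
    "install": "how to install the plugin on your server",
    "backup": "creating backups of your notes and restoring them",
    "search": "full text search across all pages",
}

def embed(text):
    """Toy 'embedding': a bag-of-words count vector. Real systems use a model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(question):
    """Return the name of the document most similar to the question."""
    q = embed(question)
    return max(docs, key=lambda name: cosine(q, embed(docs[name])))

print(most_similar("how do I install this plugin"))  # -> "install"
```

The ranking step is the same whether the vectors come from word counts or from a neural embedding model; only the quality of the match changes.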
On your points in detail:
wikenigma 1) LLMs have no concept of copyright infringement.
Yes, in general the copyright question is problematic. However, when the LLM is only used to summarize and source your own content, it is less relevant.
wikenigma there are already plenty of examples of LLM output laced with hidden malicious code
The biggest problem with code generation is not malicious code. It is that reading (i.e. reviewing) code is harder than writing it. When the LLM writes code for you, you need to review it.
Either you are able to do so, in which case writing that code yourself would actually have been easier.
Or you're not capable, in which case you're using unreviewed and most likely buggy code.
Anyway, that's not relevant for the AI Chat plugin.
wikenigma Where did the LLM dataset come from?
I think it's an open secret that all LLMs are basically based on scraped Internet content with little to no regard for copyright. You might think that's bad, or you might argue that copyright is broken anyway. It makes the companies profiting from LLMs questionable, but IMO it does not diminish the usefulness of the tool itself.
Yes. They only predict text. Retrieval-augmented generation, i.e. seeding the LLM with relevant context, lowers the chance of hallucinations, but people still need to use their own brains when digesting LLM output.
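To show what "seeding the LLM with context" means in practice, here is a sketch of RAG prompt assembly: retrieved passages are pasted into the prompt so the model answers from the sources rather than from memory. The function name and template wording are illustrative assumptions, not the plugin's actual implementation.

```python
# Sketch of retrieval-augmented generation (RAG) prompt assembly.
# The template and names are hypothetical, for illustration only.

def build_rag_prompt(question, retrieved_passages):
    """Assemble a prompt that grounds the LLM's answer in the given sources."""
    context = "\n\n".join(
        f"[Source {i + 1}] {p}" for i, p in enumerate(retrieved_passages)
    )
    return (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "How do I enable backups?",
    ["Backups can be enabled in the settings page under 'Storage'."],
)
print(prompt)
```

Note the instruction to admit when the sources don't contain the answer: that is exactly the part a reader still has to verify, because the model can ignore it and hallucinate anyway.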
wikenigma track record of the multi-billionaire owner(s)
Musk is a fascist asshole. Tesla and SpaceX still produce great technology. Would they be better off without Musk? Certainly! Should we compost billionaires? Probably.
I am just not sure what argument you're trying to make.
wikenigma In short : LLMs [...] never to be trusted without a qualified human checking the results.
Absolutely!