jducoeur: (Default)

[personal profile] jducoeur 2025-07-11 05:32 pm (UTC)(link)

I strongly suspect you could build an LLM that was resilient against this sort of nonsense -- there's nothing obviously sacred saying that the base model, the input, and the question being asked about the input have to be treated equally.

But of course, that would require that the people and companies using these LLMs have some clue what they're doing, and that's not the world we are living in...