Can hallucinations produced by large language models help in discovering new drugs?
A review of the paper: Hallucinations Can Improve Large Language Models in Drug Discovery by Yuan et al.
Can LLMs make trade-offs involving stipulated pain and pleasure states? - Keeling et al.
Can AI feel pain and pleasure? A study on the sentience of large language models.
Don’t Do RAG: When Cache-Augmented Generation is All You Need for Knowledge Tasks - Chan et al.
A better way of doing RAG? I don’t think so…
Sur la dynamique de l’électron - Poincaré
Around the same time as Einstein’s paper on relativity, Poincaré published a similar paper…