Grounding large language models

Posted on Jun 29, 2022

I came across this paper, Mapping Language Models to Grounded Conceptual Spaces, while searching for methods to address the problem of grounding large language models. It seems the general approach is to feed the model grounded data post-training. It’s interesting, as it implies that you can learn from purely mental constructs and link them to reality later on.
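As I understand it, the idea can be sketched roughly like this: pair words with simple grounded representations in-context and ask the model to complete the mapping for a new word. The color-to-RGB pairing below is my own illustrative example, not necessarily the paper's exact setup:

```python
# Hypothetical sketch: ground color words by pairing them with RGB
# triples in a few-shot prompt, then ask for a held-out word.
# The specific words and values here are illustrative assumptions.

FEW_SHOT = [
    ("red", (255, 0, 0)),
    ("green", (0, 255, 0)),
    ("blue", (0, 0, 255)),
]

def build_prompt(query_word: str) -> str:
    """Build a few-shot prompt mapping color words to RGB values."""
    lines = [f"{word}: {rgb}" for word, rgb in FEW_SHOT]
    lines.append(f"{query_word}:")  # the model would complete the grounded value
    return "\n".join(lines)

print(build_prompt("yellow"))
```

The point is that the model never sees pixels during pre-training, yet a prompt like this can link its purely textual concept of "yellow" to a concrete numeric space after the fact.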