Idealism and AI
In the not-so-long-ago (i.e., roughly 10 years back), it was common knowledge in AI circles that agents needed to be embedded in the world to do anything useful. This proved harder than expected: it is really hard to train robots to do anything, because a long tail of unexpected events makes mincemeat of any notion of "abundant IID" data, and hence violates most ML algorithms' basic data requirements.
In response, the field shifted its focus to text and other virtual environments, with the end result looking more and more like agents with a veneer of intelligence that can spit out endless verbiage disconnected from the world. To make matters worse, the thorough capture of the West by finance capital makes these disembodied modes of acting and being the only ones that carry any value – it's all media (i.e., propaganda) and markets (i.e., betting) anyway. Real work is done far away.
This level of idealism may prove incompatible with any version of the good life, and soon.