Imagine learning about the world only through glimpses of shadows. In many ways, this is precisely what we already do. In Plato's Allegory of the Cave, prisoners were chained inside a cave, facing a blank wall with a fire blazing behind them. The prisoners could only perceive their reality through the illuminated shadows cast before them—offered only small hints of their surroundings through the undefined shapes of various objects being passed in front of the fire.
In a modern context, the limitations of AI models mirror those faced by Plato's prisoners. An AI model sees only the data we give it; it tries to make sense of shadows, weighing possibilities, connections, patterns, and trends to make the best available predictions. It essentially sits within its own data cave.
The data we train models on can also be incomplete, skewed, biased, or purposely poisoned along the way. When this happens, models produce what humans interpret as nonsense. The models themselves cannot see their mistakes through human eyes because they only see what they are fed. Just as the shadows limited the prisoners' view, we limit the universe our AI models can know.