Difference between Human Learning and AI Learning? One of the key differences often highlighted is that a human can learn and generalise a new concept from one or a few examples, whereas even the most powerful deep learning neural nets often need thousands to tens of thousands (or even millions) of examples. This suggests that current mainstream deep learning methods are not exactly human-like in how they learn. Is it possible, then, to model human learning?
One hypothesis I was pondering is this: as humans, we often learn from trusted sources. For example, as young kids, we trusted our parents when they told us, “This is a cat!”. This trust appears to have a deep impact on our early learning experiences. From that one single trusted example, we deeply register the cat in our brains and are able to generalise fairly well to recognise cats in the future.
Yes, I understand that learning from sparse data is a highly challenging research area, but…
What if we could computationally model this experience and concept of trust in an AI such that:
- When “trust” between the AI and the data source is high, the AI chooses to learn from sparse data (analogous to the human learning example above) and assigns very high weights to that one example or those few training examples.
- The AI examines and extracts as many features as possible from the sparse data and generalises from there.
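The first point could be sketched as a trust-weighted loss. To be clear, this is just my illustrative sketch, not an established method: the `trust` scores (values in [0, 1]) and the function name `trust_weighted_loss` are names I am inventing here.

```python
def trust_weighted_loss(losses, trusts):
    """Scale each example's loss contribution by the trust placed in its source.

    losses: per-example loss values from any model.
    trusts: hypothetical per-source trust scores in [0, 1].

    With this weighting, one example from a fully trusted source
    (trust = 1.0) pulls the average as strongly as ten examples
    from a source with trust = 0.1.
    """
    weighted = sum(l * t for l, t in zip(losses, trusts))
    return weighted / sum(trusts)

# One trusted example dominates two low-trust examples:
loss = trust_weighted_loss(losses=[0.9, 0.5, 0.4],
                           trusts=[1.0, 0.1, 0.1])
```

Weighting the loss is only one possible realisation; trust could just as plausibly gate the learning rate or the decision to train on the example at all.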
Questions…
- Does this necessitate new neural network models?
- Would learning from sparse data run into overfitting issues?
- Will sparse data support deep neural networks, keeping in mind that shallow neural networks are generally weaker at feature representation?
- Could the AI autonomously augment the data, generating meaningful permutations of the sparse examples for effective feature representation?
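On the last question, here is a deliberately minimal sketch of what autonomous augmentation of a single trusted example might look like: jittering one feature vector into several perturbed copies. The function name and the uniform-noise model are illustrative assumptions on my part, standing in for whatever richer, learned augmentation an actual system would need.

```python
import random

def augment(example, n_copies=5, noise=0.05, seed=0):
    """Generate perturbed copies of a single trusted example.

    example: a feature vector (list of floats).
    n_copies: how many synthetic variants to produce.
    noise: maximum uniform perturbation added to each feature.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [[x + rng.uniform(-noise, noise) for x in example]
            for _ in range(n_copies)]

# Turn one "cat" example into a small synthetic training set:
synthetic = augment([0.8, 0.1, 0.3], n_copies=5)
```

Real images or text would of course call for semantic-preserving transforms (crops, flips, paraphrases) rather than raw noise, but the principle, expanding one trusted example into many plausible variants, is the same.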
Note: My What-If Thoughts (W.I.Ts) posts are raw (and possibly wild) ideas and thoughts. Feel free to discuss and share your thoughts too!