  • I strongly believe that our brains are fundamentally just prediction machines. We seek out a certain level of controlled novelty, but for the most part ‘understanding’ (i.e. being able to predict) the world around us is the goal. Boredom exists to push us past getting too comfortable and simply sitting in the already familiar, and one of the biggest pleasures in life is the ‘aha’ moment when understanding finally clicks into place and we feel we can predict something novel.

    I feel this is also why LLMs (ChatGPT etc.) can be so effective at working with language, and why they occasionally seem to behave so humanlike: the fundamental mechanism is essentially the same as in our brains, if massively more limited. Animal brains continuously adapt to predict sensory input (and, to an extent, their own output), while LLMs learn to predict a sequence of text tokens during a restricted training period (roughly the objective sketched below).
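
    To make that concrete, here is a minimal sketch of the next-token-prediction objective. The toy model, sizes, and random ‘sentence’ are entirely my own for illustration (real LLMs use transformers trained on vast corpora), but the objective is the same: predict token t+1 from tokens 0..t and reduce the prediction error.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy sizes, purely illustrative.
    vocab_size, embed_dim, hidden_dim = 100, 32, 64

    embed = nn.Embedding(vocab_size, embed_dim)
    rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)  # stand-in for a transformer
    head = nn.Linear(hidden_dim, vocab_size)
    loss_fn = nn.CrossEntropyLoss()

    tokens = torch.randint(0, vocab_size, (1, 16))   # a stand-in "sentence" of token ids
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens 0..t

    hidden, _ = rnn(embed(inputs))
    logits = head(hidden)                            # (1, 15, vocab_size) next-token scores
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()                                  # training = reducing this prediction error
    print(f"prediction loss: {loss.item():.3f}")
    ```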

    It also seems to me that the strongest example of this kind of prediction in animals is our noticing of (and wariness toward) anything that feels ‘off’ about the environment around us. We can easily sense specific kinds of small changes to our surroundings that signify potential danger, even in seemingly unpredictable natural environments. From an evolutionary perspective this also looks like the most immediately beneficial aspect of predictive capability. Interestingly, this kind of prediction seems to happen even at the level of individual neurons. And as predictive capability improves, it demands an increasingly rich internal model of the world, leading to deeper cognition.
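
    As a toy illustration of that ‘something feels off’ signal (entirely my own construction, not from any particular model): an agent keeps a running prediction of a sensory signal, flags a spike in prediction error as ‘off’, and then adapts until the changed environment becomes predictable again.

    ```python
    import random

    random.seed(1)

    prediction, lr, threshold = 0.0, 0.2, 1.5

    def sense(t: int) -> float:
        """Noisy signal around 0 until the environment shifts at t=80."""
        baseline = 0.0 if t < 80 else 3.0
        return baseline + random.gauss(0, 0.3)

    for t in range(100):
        observation = sense(t)
        error = observation - prediction    # prediction error ("surprise")
        if abs(error) > threshold:
            print(f"t={t}: feels off (error={error:+.2f})")
        prediction += lr * error            # adapt: learn to predict the input
    ```

    Run it and the alarms fire only for the first few steps after the shift at t=80; once the prediction catches up, the shifted environment stops feeling ‘off’.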