I agree, LLMs have the amazingly human ability to bumble into the right answer even if they don’t know why.
It seems to me a good analogy for our experience is a whole bunch of LLMs optimized for different tasks, with some other LLM acting as a scheduler/administrator over the lower-level models, and that top layer being consciousness. Might be more layers deep, but that's my guess with no neurological or machine learning background. A rough sketch of what I mean is below.
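Purely to illustrate the shape of the idea (it's close to what ML people call a router or mixture-of-experts setup): a toy "scheduler" picks a specialist for each prompt and delegates to it. Everything here is hypothetical, and the specialists are stub functions standing in for real model calls.

```python
from typing import Callable, Dict

# Hypothetical specialist "models", each optimized for one task.
# Real versions would be separate fine-tuned models or API calls.
def math_specialist(prompt: str) -> str:
    return f"[math model] working on: {prompt}"

def language_specialist(prompt: str) -> str:
    return f"[language model] working on: {prompt}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "math": math_specialist,
    "language": language_specialist,
}

def scheduler(prompt: str) -> str:
    """The 'administrator' layer: classify the prompt, then delegate.

    A real version would itself be a model making the routing decision;
    a crude keyword check stands in for that here.
    """
    task = "math" if any(ch.isdigit() for ch in prompt) else "language"
    return SPECIALISTS[task](prompt)

if __name__ == "__main__":
    print(scheduler("What is 12 * 7?"))
    print(scheduler("Summarize this paragraph for me."))
```

In this analogy the scheduler would be the conscious layer: it never does the task itself, it just decides which lower-level system handles it.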