This is a cool way to put it, but I think even errors and randomness in reproducing source ideas can sometimes count as original ideas. That said, I also don't think it fully covers the range of mechanisms by which humans come up with original ideas.
I agree with this in terms of process, but not necessarily in terms of result. If you enumerate the state space of a target domain, you might realize that every construction in it can be reached by randomly introducing errors or modifications into a finite set of predefined constructions. From what I know, most AI models don't really work like this (they don't deliberately randomize inference or introduce errors), otherwise they could probably evade model collapse. But I don't see why they couldn't work like this. Humans often do: a lot of new genres and styles appear when people do something inspired by something else, fail to reproduce it accurately, realize on evaluation that they like how it turned out, and keep doing that thing, which then evolves further by slight mutations. I'm not saying I want AI to do this, or that I like AI or anything; I'm just saying I think it's a real possibility.
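The mutate-evaluate-keep loop I'm describing can be sketched as a tiny toy program. This is purely illustrative, not how any real AI system works; the "aesthetic" evaluator here is a made-up stand-in (it just prefers more alternation between symbols) to show how imperfect copying plus selection can drift a pattern away from its source.

```python
import random

random.seed(0)

def mutate(pattern, error_rate=0.2):
    """Imperfectly reproduce a pattern: each symbol may be miscopied."""
    symbols = "ab"
    return "".join(
        random.choice(symbols) if random.random() < error_rate else c
        for c in pattern
    )

def likes(pattern):
    """Hypothetical aesthetic judgment: count adjacent symbol changes."""
    return sum(1 for x, y in zip(pattern, pattern[1:]) if x != y)

current = "aaaaaaaa"  # the "predefined construction" being imitated
for _ in range(200):
    attempt = mutate(current)            # inaccurate reproduction
    if likes(attempt) > likes(current):  # evaluation: keep what turned out well
        current = attempt                # the style drifts by slight mutations

print(current, likes(current))
```

After a couple hundred rounds the pattern no longer resembles the original, even though no step ever "invented" anything; it only copied badly and kept what it liked.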