It’s pretty funny that most researchers haven’t figured more of this out. The leaps we make to get progress are so simple. The adversarial approach has basically been in play in the spam world for years, and convolutional networks just required faster servers.
The ideas needed to get to the next level are all fairly straightforward.
Solving the “can’t think of new creatures” problem is as easy as building up training data sets that encode basic forms and basic physics. Yes, literally: gather tons of animal-anatomy books and tons of animator drawings that capture relevant physics and general animal forms, and you will have no problem generating new creatures. Then use adversarial networks to test the plausibility of those new creatures. The key isn’t to generate a single image but a set of animations, so you can check that the physics holds up in space and time. This will require far more training data than today’s cat or dog generators, but ultimately it’s exactly what’s going to be done. (In particular, it will be used to generate realistic VR inhabitants.)
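The “check the physics across an animation, not a single frame” idea can be sketched as a toy discriminator: score a candidate trajectory by how closely its frame-to-frame accelerations match a known physical law (constant gravity here). Every name and constant below is an illustrative assumption, not an existing system.

```python
# Toy "physics discriminator": scores an animated trajectory by how
# well its frame-to-frame accelerations match constant gravity.
# Frame rate, gravity constant, and glitch size are all assumptions.

GRAVITY = -9.8   # m/s^2, assumed constant downward acceleration
DT = 1.0 / 30.0  # assumed 30 fps animation

def physics_residual(heights):
    """Mean squared deviation of observed acceleration from gravity.

    heights: per-frame vertical positions of one point on the creature.
    Lower residual = more physically plausible animation.
    """
    residual = 0.0
    for i in range(1, len(heights) - 1):
        # second finite difference approximates acceleration
        accel = (heights[i + 1] - 2 * heights[i] + heights[i - 1]) / (DT * DT)
        residual += (accel - GRAVITY) ** 2
    return residual / (len(heights) - 2)

def make_trajectory(v0, frames, glitch_at=None):
    """Simulate a ballistic arc; optionally inject an impossible jump."""
    heights = []
    for i in range(frames):
        t = i * DT
        h = v0 * t + 0.5 * GRAVITY * t * t
        if glitch_at is not None and i >= glitch_at:
            h += 0.5  # teleport: the kind of error a single still image hides
        heights.append(h)
    return heights

real = make_trajectory(5.0, 20)
fake = make_trajectory(5.0, 20, glitch_at=10)
print(physics_residual(real) < physics_residual(fake))  # prints True
```

A real system would of course learn this score with an adversarial network rather than hard-code gravity, but the point stands: the glitched trajectory looks fine frame by frame and only fails when judged as a sequence.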
I keep saying it: every image contains a mathematical theorem (physical reality is a subset of such theorems). Think that way and you can pretty much crack AI wide open, as far as it can be cracked.
That said, the limit of AI is still computational irreducibility, incompleteness, and the speed of light. And of course, once AI has all this generative capability it will do as humans do: spend most of its energy making up stories and patterns and then having to falsify them.