... and now let's move off topic with AI-generated compilers
That is on topic, since AI is a human concept.
Let's pull this back firmly onto topic.
If a so-called AI ("Spicy pattern matching" as somebody called it) contributes code to a project, who takes responsibility for it?
Specifically, if somebody signs off on that code as reliable (or even as trusted, i.e. it embeds unreliable components but behaves reliably), can they justify their actions?
There's a lot of crap written these days about "corporations being people" and so on, but that's normally said in the context of things like political donations. The bottom line is that if something bad happens in an engineering project (or a financial institution etc.) there's almost always a detailed investigation to find out who, i.e. which man or woman, was responsible, even if it's ultimately decided that they weren't malicious.
No such investigation can be performed for current AIs, since they cannot explain their reasoning and since, by the time the damage has been detected, their reasoning might no longer be reproducible.
MarkMLl