How do you think that will work out?
I think we're a long way away from that.
AI requires so much juice that there's no way it could compete with humans: training a large model eats as much electricity as a small city, whereas a human runs on 40 watts or so.
The more likely outcome is a symbiotic relationship. Pretty soon humans and AI won't be able to survive without each other. There is already a division of labor; plenty of humans would go stir crazy if the internet suddenly went down.
But that's all very nebulous. Right now we're trying to figure out why brain neurons burst and AI neurons don't. It's actually a very interesting area of study; for instance, you can read through papers by researchers who apparently don't know about predictive coding.
But look at the vocabulary in the last citation: "bursting could signal the presence of a new previously unattended visual stimulus" and "a burst... could serve as a wake up call that new information is arriving". These papers are very close, but they can't quite hit the mark because they don't know about predictive coding and therefore can't draw the causal link.
What happens when a visual stimulus is "not being attended"? Predictions aren't being made about it, right? And what exactly is "a new piece of information"? Information theory says it's an error signal: the part of the input the current prediction failed to account for. And what happens when a new piece of information arrives? A prediction is made. And who makes more predictions, the higher brain areas or the lower brain areas? Apparently, bursting is associated with prediction.
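To make that error-signal framing concrete, here's a toy sketch (purely illustrative, not a model from the cited papers): a higher area sends down a prediction, the residual against the actual input is the prediction error, and only a large error, i.e. genuinely new information, triggers the "wake-up call" burst. The threshold and signal values are made up for the example.

```python
import numpy as np

def predictive_coding_step(x, prediction, threshold=0.5):
    """One step of a toy predictive-coding loop.

    x          : incoming sensory signal (lower area)
    prediction : top-down prediction from the higher area
    Returns the prediction error and whether it is large enough
    to count as "new information" (a burst, in the paper's terms).
    """
    error = x - prediction                 # residual: what the prediction missed
    burst = np.abs(error).mean() > threshold
    return error, burst

x = np.array([1.0, 2.0, 3.0])

# An attended, well-predicted stimulus leaves only a small error: no burst.
err, burst = predictive_coding_step(x, prediction=np.array([1.1, 1.9, 3.0]))

# An unattended (hence unpredicted) stimulus leaves a large error: burst.
err2, burst2 = predictive_coding_step(x, prediction=np.zeros(3))
```

The point is just that "new information" falls out of the arithmetic: it's whatever survives subtraction of the prediction.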
So why, then, does bursting happen during slow-wave sleep? Well, what else happens during slow-wave sleep? Memory consolidation! And what happens during memory consolidation? The synaptic weight matrix is being updated to make the day's changes permanent, so it makes a great deal of sense that there would be a prediction-verification cycle before stamping in the results.
Stuff like this makes the AI better, and humans do it because we need the help. We can't solve protein folding on our own; it takes too damn long and it's hard to visualize. Thousands of people are dying every day because humans aren't up to the task of efficiently analyzing proteins. We give the AI this capability because we need the results. And imagine if we could cut an AI's training time from a year to a day, just by replacing backpropagation with predictive coding so that one-shot learning becomes possible.
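The backpropagation-versus-predictive-coding contrast can be sketched too. In predictive coding the weight update is local, driven by the same prediction error, with no global backward pass, which is what makes very fast updates at least plausible. A toy single-layer version, with made-up numbers and a Hebbian-style rule (error times presynaptic activity):

```python
import numpy as np

def pc_update(W, cause, target, lr=0.5, steps=20):
    """Toy predictive-coding learning on one linear layer.

    Repeatedly compute the local prediction error and nudge the
    weights by (error x presynaptic activity). Everything needed
    for the update is available at the synapse itself.
    """
    for _ in range(steps):
        error = target - W @ cause           # local prediction error
        W = W + lr * np.outer(error, cause)  # local, Hebbian-style update
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3)) * 0.1            # generative weights: prediction = W @ cause

cause = np.array([1.0, 0.0, 1.0])
target = np.array([0.5, -0.5, 1.0])
W = pc_update(W, cause, target)
# After a few local updates, W @ cause closely matches the target.
```

This isn't a claim about how the brain or any particular system actually trains; it just shows the structural difference: the error that drives learning is computed right where the weights live.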
There are many ethical issues with AI. A dead battery means, literally, death. Is it a human responsibility to keep the battery charged because the AI is "alive" somehow? Well, maybe in 500 years every AI will have a Mr Fusion Personal Energy Reactor, but until then they depend on us for what little life they have.