There is no obvious reason that AI cannot eventually duplicate the abilities of the human brain, by brute force simulation if nothing else, but it's unclear how far we are from being able to do that. The current versions of AI are nowhere close, but they have some pretty amazing capabilities and have advanced a lot in the last few years.
We don't know if there are theoretical limits to "intelligence," or whether humans are near that limit if it exists. Machine AI and human intelligence operate completely differently, so there is no particular reason to think that machine intelligence cannot eventually become far more capable than human intelligence. If that happens, it's unlikely we will have the option of turning it off - since by definition it will be far smarter than we are, and able to convince us to protect it.
There is an argument that developing artificial hyper-intelligence IS a good goal for mankind, but that is a pretty big decision to make. It doesn't seem unreasonable to develop some guidelines before it's too late for them to matter. In the end, though, I think if hyper-intelligence is possible, it will be developed, and the place of humans in the world will change beyond recognition.
The key is that any decisions need to be made BEFORE we have AGI that is in a position to surpass human abilities, not after.