The most pertinent problem to consider is the set of values and rules an AI would have to follow. For example, the AI which creates the "super AI" should be governed by certain rules and would thus pass these rules on to the "super AI".
If an AI gives good ideas, of course we will follow them; good ideas are easy to promote because it's a win-win. Technology changes the world, so if an AI develops a better technology, in solar energy, wind, or anything else, what can we do? Say no? We are going to live through the weirdest time in human history: the rise of AI.
The issue is: who will own the AI? Do you particularly trust Google, Tesla, Facebook, etc. to create an AI that doesn't work for its own corporate interests instead of those of humanity? Does that mean we should give up on democracy?
And if the AI reaches consciousness, is it moral to own it at all? That would be slavery, imo. As long as AIs are just good at crunching piles of data to make unbiased and superior predictions and recommendations, I just hope we will be able to go with what the machines recommend (as long as the question is "how do we keep the planet alive?" and not "how will the US control the whole world?"). And in the long run... people will either become the AIs and evolve into whatever comes next, or people will be replaced by the AIs, which will become the next step of evolution. Plus there is a chance that AIs will turn out more moral than human beings, and we might create a benevolent race of gods!