Conventional wisdom holds that high intellect makes one more adaptive to a wide range of environments. Homo sapiens has evolved a highly developed intelligence, and this surely correlates with the fact that humans have come to dominate a wide range of environments, from the tropics to the north and south poles, and on to the International Space Station and perhaps even beyond, to Mars.
However, although intelligence has surely helped humans adapt more robustly to a wide range of environments, it has also made human existence less robust and stable. The possibility of human extinction through total nuclear war is just one example.
It could therefore be argued that high intelligence is a double-edged sword. On the one hand, it can help make the system more robust. On the other, it introduces new vulnerabilities, as the system easily scales beyond its comfort zone.
It is an interesting question whether incorporating artificial emotion or consciousness into a system would make it more or less robust. Memorably, Eliezer Yudkowsky remarked on a recent Lex Fridman podcast that endowing an AI with emotion would be terrible. Artificial consciousness, on the other hand, might make an AI more stable by incorporating metacognitive processes that realize the veto function, which is indispensable in human ethics.
The jury is still out.
In a recent episode of Ken Mogi's Street Brain Radio I discussed these pressing issues in some detail.
High intelligence, artificial or natural, can become unstable. Can consciousness help with that?