Date of publication: 2017-08-30 17:32
Most humans have no qualms about shutting down and rewriting programs that don't work as intended, but many strongly object to killing people with disabilities and designing better-performing babies. Where to draw the line between these cases is a tough question, but as AGIs become more animal-like, there may be increasing moral outrage at shutting them down and tinkering with them willy-nilly.
If we view the exponential growth of computation in its proper perspective as one example of the pervasiveness of the exponential growth of information-based technology, that is, as one example of many of the law of accelerating returns, then we can confidently predict its continuation.
So much technological change within such a short period of time may be more than humans can handle. Humans must learn how to adapt to the societal changes that this new technology will bring, so that we don't end civilization as we know it. We as a species must strive for peace.
In the above diagram (courtesy of Scientific American), we can see that SETI has already thoroughly searched all star systems within 10^2 light-years of Earth for alien civilizations able (and willing) to transmit at a power of at least 10^20 watts, a so-called Type II civilization (and all star systems within 10^1 light-years for transmissions of at least 10^13 watts, and so on). No sign of intelligence has been found as of yet.
Bostrom doesn't present specific arguments for thinking that a few crucial insights may produce radical jumps. He suggests that we might not notice a system's improvements until it passes a threshold, but this seems absurd: at the very least, the AI's developers would be intimately acquainted with its performance. As the slogan goes (not strictly accurate, but apt): "You can't improve what you can't measure." The AI's progress might not make world headlines, but the academic and industrial community would be well aware of nontrivial breakthroughs, and the developers themselves would live and breathe performance numbers.
Take care in selecting your thesis. This is really a type of persuasive essay, but you don't want to be stuck either just repeating someone else's opinion or citing all the same sources. Try to come up with an original thesis, or take an aspect of someone else's thesis and develop it. You can also take a thesis and "transplant" it into different circumstances. For example, use the tools of modern economics to argue about the role of medieval guilds in the development of early European settlements, or take a study done on children in France and try to show that it is or isn't applicable to elderly Florida residents. An original thesis is the best start you can make toward a high grade on a research essay.
While many people have had ideas about a global brain, they have tended to suppose that it can be improved or altered by humans according to their will. Metaman can instead be seen as a development that directs humanity's will to its own ends, whether humanity likes it or not, through the operation of market forces.
That said, as noted previously, early work on AGI safety has the biggest payoff in scenarios where AGI takes off earlier and harder than people expected. If the marginal returns to additional safety research are many times higher in these "early AGI" scenarios, then it could still make sense to put some investment into them even if they seem very unlikely.
Suppose an AI wants to learn about the distribution of extraterrestrials in the universe. Could it do this successfully by simulating lots of potential planets and looking at what kinds of civilizations pop out at the end? Would there be shortcuts that would avoid the need to simulate lots of trajectories in detail?
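The simulation approach described above can be sketched as a simple Monte Carlo estimate. The stage probabilities and the three-stage model below are purely illustrative assumptions, not figures from the text; a real simulation would model each planetary trajectory in far more detail.

```python
import random

# Hypothetical per-stage probabilities -- illustrative assumptions only.
P_LIFE = 0.1           # chance a planet develops life
P_INTELLIGENCE = 0.01  # chance life becomes intelligent
P_CIVILIZATION = 0.5   # chance intelligence builds a detectable civilization

def simulate_planet(rng: random.Random) -> bool:
    """Coarse stand-in for simulating one planetary trajectory in detail:
    a planet yields a civilization only if every stage succeeds."""
    return (rng.random() < P_LIFE
            and rng.random() < P_INTELLIGENCE
            and rng.random() < P_CIVILIZATION)

def estimate_civilization_rate(n_planets: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the fraction of simulated planets
    that end up hosting a detectable civilization."""
    rng = random.Random(seed)
    hits = sum(simulate_planet(rng) for _ in range(n_planets))
    return hits / n_planets

print(f"estimated rate: {estimate_civilization_rate(1_000_000):.2e}")
```

The "shortcut" question in the text corresponds to noticing that, under this toy model with independent stages, the answer is just the closed-form product P_LIFE * P_INTELLIGENCE * P_CIVILIZATION (here 5e-4), with no simulation needed; the open question is whether comparable shortcuts exist when the stages interact in complicated ways.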
Besides a single country taking over the world, the other possibility (perhaps more likely) is that AI is developed in a distributed fashion, either openly, as is the case in academia today, or in secret by governments, as is the case with other weapons of mass destruction.