Connected Space is a spiritual prequel to Beyond the Twelve Worlds, taking some of its concepts (particularly backstory concepts) and using them to develop a hard sci-fi "space opera" with appropriate setup to avoid adventure-destroying technologies, such as characters who can be backed up (either because they are natively digital intelligences or because they are digitized human minds).
Basic problems with a hard sci-fi space opera:
- The universe is too big to have multi-system civilizations.
- No true FTL; there is no hard scifi FTL drive that seems plausible.
- Wormholes mostly solve this, but artificial wormholes can probably only be made by extremely "dramatically boring" civilizations for whom all problems short of the heat death of the universe are trivial.
- Naturally occurring wormholes are not likely to be navigable or go anywhere interesting to people.
- Preferred solution, then, is that the wormhole network was created by a now non-interacting "dramatically boring civilization" for their own inscrutable reasons.
- Individual biological life will probably become irrelevant before a large multi-system civilization could arise.
- In the 2020s, the most likely mid-term technological scenarios favor a transition away from organic life as the species that would be sent to explore the universe. Humans would follow far behind a friendly AI colonization effort preparing the way for them, or be absent entirely because of unfriendly AI.
- If the current AI Spring proves to be a false start on human-level AI (for example, because stochastic gradient descent is too training-data inefficient), this probably still only buys a century or so before brute force approaches like artificial evolution become viable. If artificial evolution isn't viable, then human consciousness is probably an extremely rare fluke in naturally-occurring life as well, which is unfavorable for a setting with non-human sophonts.
- A reactionary movement rejecting AI, in the presence of a false-start 2020s AI Spring that leads to AI which threatens mass social upheaval but not superhuman intelligence, could suppress AI development for a long time only in the presence of a general technological contraction. Otherwise, it will probably be too easy for a rogue actor to develop AI anyway, given the potential rewards.
- An interstellar civilization arising from the scenario above is likely to be crushed by the first rival it encounters which has, or is, strong AI.
- The civilization in the scenario above would need independent reasons to reject pursuing human mind-uploading. This is plausible, but a civilization that would categorically reject human mind-uploading is probably highly regressive socially. If they don't reject it, why wouldn't uploads be doing the highly dangerous galaxy exploration?
