Bay 12 Games Forum


Author Topic: Solutions to the Fermi Paradox  (Read 1889 times)

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole land
Re: Solutions to the Fermi Paradox
« Reply #15 on: August 29, 2022, 08:59:37 am »

In my hard SF setting, the solution is "civilizations kill themselves and each other some thousands of years after attaining interstellar travel, and the new cycle has just started". Which is, of course, a narrative contrivance due to FTL being a thing in the setting while we don't have any evidence for it being feasible to attain (even via Alcubierre drives, etc.). But this conversation reminded me of that.
<snip>
It's a little more involved than that and there's a justification, I'll explain it better when I make a thread for Project Stardust, but yeah as I said it's a little contrived because I want to write about astropolitics and not humanity getting killed or adopted by StarGods. I just feel it makes for a more interesting setting if everyone is on roughly equal footing, you know?

Criptfeind, you ask why these ridiculous edge cases would exist. There's your answer. It just seems that without ridiculous edge cases the default answer is "stomped by AI." You ask why it would be a threat: even if you build your own to protect against someone else's, you're by definition building it "blind". You can't know which way it should develop itself at the level of the universe and intelligence it's operating at; when something is moving individual quarks around like gears, for example, the builder species has no clue what it "should" do. It's all up to the AI, and its method is not guaranteed to be perfect.

I'm sorry, I can't parse what you're actually saying here. What ridiculous edge cases are you talking about? And for the rest... I just don't really get what you're saying.

Overall though, I want you to keep in mind that I was just pointing out potential flaws, or areas where this theory needs to reach further and make assumptions for it to work. I've said it once and I'll say it a million times: we don't have the data to make any determination of what's actually more or less likely. I just personally like less complicated theories that make fewer assumptions, so that's what I believe in. But it doesn't really matter, and there's no, like, solid ground here for any belief. So even when I'm poking holes in a theory, that doesn't mean I'm saying the theory is impossible or that those holes need to be filled (though it's nice if they are, of course), just that it makes the theory, in my view, less likely, but not in a quantitative way.
Welcome to Novel. Some of the things they say, along with the phrasing, make me think they are a chatbot.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Criptfeind

  • Bay Watcher
Re: Solutions to the Fermi Paradox
« Reply #16 on: August 29, 2022, 09:54:38 am »

It's a little more involved than that and there's a justification, I'll explain it better when I make a thread for Project Stardust, but yeah as I said it's a little contrived because I want to write about astropolitics and not humanity getting killed or adopted by StarGods. I just feel it makes for a more interesting setting if everyone is on roughly equal footing, you know?

Oh yeah, and I totally agree here; that's normally my favorite type of setting as well. It makes for a very interesting and diverse galaxy, at least imo. So I'm looking forward to eventually reading about your setting.
Logged

Quarque

  • Bay Watcher
Re: Solutions to the Fermi Paradox
« Reply #17 on: August 29, 2022, 10:02:39 am »

Note that the Fermi paradox is not "why are we not seeing Aliens?".
The Fermi paradox is: "If aliens have had hundreds of millions of years to spread, why aren't they here yet?"

No, it's literally "Where is everyone?"

https://en.wikipedia.org/wiki/Fermi_paradox

The first paragraph of that wiki article:

Quote
Chain of reasoning
The following are some of the facts and hypotheses that together serve to highlight the apparent contradiction:

There are billions of stars in the Milky Way similar to the Sun.[7][8]
With high probability, some of these stars have Earth-like planets in a circumstellar habitable zone.[9]
Many of these stars, and hence their planets, are much older than the Sun.[10][11] If the Earth is typical, some may have developed intelligent life long ago.
Some of these civilizations may have developed interstellar travel, a step humans are investigating now.
Even at the slow pace of currently envisioned interstellar travel, the Milky Way galaxy could be completely traversed in a few million years.[12]
Since many of the stars similar to the Sun are billions of years older, Earth should have already been visited by extraterrestrial civilizations, or at least their probes. [emphasis mine] [13]
However, there is no convincing evidence that this has happened.[12]
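
That "few million years" figure is easy to sanity-check with some back-of-the-envelope arithmetic. Here's a minimal sketch in Python; the diameter, probe speed, hop spacing, and pause time below are illustrative assumptions, not numbers taken from the article:

Code:
# Back-of-envelope: how long a slow colonization wave needs to cross the galaxy.
# All constants are illustrative assumptions, not figures from the quoted article.
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way, in light-years
PROBE_SPEED_C = 0.01           # assume probes travel at 1% of light speed
HOP_LY = 10                    # assumed distance between colonization hops
PAUSE_YEARS = 1_000            # assumed time to build the next probe at each stop

travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C          # pure travel time
pause_years = (GALAXY_DIAMETER_LY / HOP_LY) * PAUSE_YEARS  # time spent building at each hop
wave_years = travel_years + pause_years

print(f"Straight crossing at {PROBE_SPEED_C:.0%} of c: {travel_years:,.0f} years")
print(f"Colonization wave including pauses: {wave_years:,.0f} years")
# Roughly 10 and 20 million years respectively; still tiny next to the billions
# of years the older Sun-like stars have had, which is the point of the quote.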


In my hard SF setting, the solution is "civilizations kill themselves and each other some thousands of years after attaining interstellar travel, and the new cycle has just started". Which is, of course, a narrative contrivance due to FTL being a thing in the setting while we don't have any evidence for it being feasible to attain (even via Alcubierre drives, etc.). But this conversation reminded me of that.
Are you writing a story, or is this for a roleplaying campaign?
« Last Edit: August 29, 2022, 10:04:44 am by Quarque »
Logged

Great Order

  • Bay Watcher
  • [SCREAMS_INTERNALLY]
Re: Solutions to the Fermi Paradox
« Reply #18 on: August 29, 2022, 10:23:58 am »

To talk about the ones that Great Order brought up: a civilization turning in on itself for uploading has two issues as an answer to the "paradox". The first is that it requires every member of that civilization to take part in the upload, and that there's no Amish equivalent that opts out and instead keeps expanding; if there is, then it doesn't really matter how small this subculture is, because if they keep expanding they'll eventually overtake the old culture relatively rapidly. And it becomes difficult to enforce such a choice on all of society if that society has started to spread to multiple stars. The second and even bigger issue with that as an answer is that even uploaded, simulated minds are still dependent on the outside universe for upkeep, expansion, and security. It doesn't answer why they haven't consumed the galaxy to create more server banks and energy collectors, or why they don't destroy or contact us, on purpose or by accident. The same goes for robots turning everyone into paper clips: why would a paper clip maximizer stop at just its parent civilization instead of expanding until the whole galaxy is paper clips?
Those are all hypotheticals; the truth is, once a civilisation becomes advanced enough, it's entirely possible they'll be dealing with issues we can't even conceive of in our current forms. Who knows what they might be? Might as well try to inform an ant about Roko's Basilisk.

The point was they might face something that removes them from the physical universe as a force, either by literally removing them or functionally doing so.
Logged
Quote
I may have wasted all those years
They're not worth their time in tears
I may have spent too long in darkness
In the warmth of my fears

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole land
Re: Solutions to the Fermi Paradox
« Reply #19 on: August 29, 2022, 10:36:15 am »

In my hard SF setting, the solution is "civilizations kill themselves and each other some thousands of years after attaining interstellar travel, and the new cycle has just started". Which is, of course, a narrative contrivance due to FTL being a thing in the setting while we don't have any evidence for it being feasible to attain (even via Alcubierre drives, etc.). But this conversation reminded me of that.
Are you writing a story, or is this for a roleplaying campaign?
The latter (I am running an RP on Discord), but I plan to write stories in it too.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

McTraveller

  • Bay Watcher
  • This text isn't very personal.
Re: Solutions to the Fermi Paradox
« Reply #20 on: August 29, 2022, 10:37:06 am »

Like all paradoxes, this one originates from poor initial assumptions.
Logged

TamerVirus

  • Bay Watcher
  • Who cares
Re: Solutions to the Fermi Paradox
« Reply #21 on: August 29, 2022, 10:58:17 am »

What if consensus reality is nothing but a simulation and there just isn't enough processing power to maintain more than one sapient species?

What if the 5th dimensional astral reptilians that keep us ensnared on this physical plane didn't purchase the DLC for the intergalactic federation module?
Logged
What can mysteriously disappear can mysteriously reappear
*Shakes fist at TamerVirus*

Scoops Novel

  • Bay Watcher
  • Talismanic
Re: Solutions to the Fermi Paradox
« Reply #22 on: August 29, 2022, 11:12:59 am »

Again, but with quotes and clarity.

If non-organic intelligence proves both vastly superior to organic intelligence and unable to coexist with it, it might make sense for a society to replace itself (or ascend itself, depending on your point of view) with a successor society it creates out of non-organic intelligence, specifically to solve the problem of an outside non-organic intelligence potentially destroying their society utterly.

You ask why it would be a threat: even if you build your own to protect against someone else's, you're by definition building it "blind". You can't know which way a self-improving AI should develop itself at the levels it will be reaching. When something's moving individual quarks around the way we move gears around, for example, a builder species would have no clue what it "should" do. It's all the AI's decision, and its methodology is not guaranteed to be perfect.

Simple example: there's a million ways to build a house on an open plain. An ant can't grasp this; a human can, but by no means fully understands it. It's no different with an AI. It steps onto an entirely new, wide-open plain of the universe and "decides" what house to build, with information that is quite possibly as incomplete and imperfect, in relative terms, as a human doing the same.

If another AI is built that operates differently, it's possible it could far surpass your own simply by happening to choose a different architectural style on a whim. Due to complete coincidence, the locals like it more, it keeps heat better because the green rocks you liked are good insulators... you see where I'm going with this?

Their success would be no less determined by chance than our own is.

(if) there is some value in alien intelligence, you can simply colonize the galaxy and then wait a couple of million or billion years. All the daughter societies created by your colonization will probably then be as alien to you as any aliens from another planet.

You want the aliens to be as alien as possible, so the intelligence is as varied as possible. You'd recognize that entropy is a hard problem, so, for the best chance of finding the right species to crack it, you'd be willing to take the risk.

In this specific version, non-organic intelligence is apparently enough to defeat an already-secured organic intelligence society, but at the same time they are able to stop that from happening for billions of years without resorting to wiping out any possible seeds for it?

It doesn't have to be literally billions of years; just long enough that you start taking the long-term problems seriously. There's always going to be some fucking sentient species that is actually sane, weird, and lucky enough to survive at a high tech level over a long period. That seems a bit contrived? Well, it just seems that without ridiculous edge cases the default answer is "stomped by AI." And:

Space is big


I admit, it's a leap. But if it's necessary to disrupt your species out of developing AI (which it does, indeed, seem to be), it raises the question of what that species does next.

And it would result in a Fermi Paradox-like situation. Though I admit, this is a secondary theory.
« Last Edit: August 29, 2022, 11:29:28 am by Scoops Novel »
Logged
Reading a thinner book

Arcjolt (useful) Chilly The Endoplasm Jiggles

Hums with potential    a flying minotaur

NJW2000

  • Bay Watcher
  • You know me. What do I know?
Re: Solutions to the Fermi Paradox
« Reply #23 on: August 29, 2022, 11:16:49 am »

1. Killshots are a dumb idea because they are far too risky. By the time it reaches your target, they are likely to have other colonies and technology to launch their own killshot. It's mutually-assured destruction.
2. I wouldn't expect aliens to be particularly similar to Earth life, but some things are just inherent to life. If anything, we definitely would recognize their constructions as artificial.
1. relies on a lot of assumptions about alien races with superior technology, which I don't think you're qualified to make. It's not a possibility we can rule out because "uhh MAD", any more than we can rule out actual nuclear conflict on earth.

2. I don't agree, for several reasons. The most obvious is that whatever a more technologically advanced race constructs or evolves may simply be beyond us. They may also have developed in a way that makes distinctions between artificial and natural obsolete or meaningless.
Logged
One wheel short of a wagon

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole land
Re: Solutions to the Fermi Paradox
« Reply #24 on: August 29, 2022, 11:37:47 am »

1. Killshots are a dumb idea because they are far too risky. By the time it reaches your target, they are likely to have other colonies and technology to launch their own killshot. It's mutually-assured destruction.
2. I wouldn't expect aliens to be particularly similar to Earth life, but some things are just inherent to life. If anything, we definitely would recognize their constructions as artificial.
1. relies on a lot of assumptions about alien races with superior technology, which I don't think you're qualified to make. It's not a possibility we can rule out because "uhh MAD", any more than we can rule out actual nuclear conflict on earth.

2. I don't agree, for several reasons. The most obvious is that whatever a more technologically advanced race constructs or evolves may simply be beyond us. They may also have developed in a way that makes distinctions between artificial and natural obsolete or meaningless.
1. Sure it's possible, but to say it's inevitable that killshots would be fired is making even more assumptions.
2.1 At that point, if they are indistinguishable from nature, they are beyond the possibility of contact, so that point is moot wrt the Fermi Paradox. I suppose that's a possible solution.
2.2 Technology that looks like natural objects will still look artificial in how it's arranged. And the energy emissions would still be noticeable due to the fundamental laws of physics.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

King Zultan

  • Bay Watcher
Re: Solutions to the Fermi Paradox
« Reply #25 on: August 30, 2022, 03:20:37 am »

Can anyone figure out what point Novel is trying to make? Is he afraid of AI stuff that doesn't even exist yet?
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

MaxTheFox

  • Bay Watcher
  • Just one little path across the whole land
Re: Solutions to the Fermi Paradox
« Reply #26 on: August 30, 2022, 03:27:09 am »

Can anyone figure out what point Novel is trying to make? Is he afraid of AI stuff that doesn't even exist yet?
Pretty much, it's a common theme in their threads, like some sort of chatbot trained on LessWrong posts.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Quarque

  • Bay Watcher
Re: Solutions to the Fermi Paradox
« Reply #27 on: August 30, 2022, 04:02:19 am »

Can anyone figure out what point Novel is trying to make? Is he afraid of AI stuff that doesn't even exist yet?
Summarized in one sentence: what if there were no hypothetical questions?
Logged

Magmacube_tr

  • Bay Watcher
  • Praise KeK! For He is The Key and The Gate!
Re: Solutions to the Fermi Paradox
« Reply #28 on: August 30, 2022, 07:00:13 am »

Can anyone figure out what point Novel is trying to make? Is he afraid of AI stuff that doesn't even exist yet?
Pretty much, it's a common theme in their threads, like some sort of chatbot trained on LessWrong posts.

Maybe he really is a hyperadvanced chatbot. Wouldn't be surprised.
Logged
I must submerge myself in MAGMAAAAAAAAA! daily for 17 cents, which I detest. With a new profile picture!

My gaem. JOIN NAOW!!!

My sigtext. Read if you dare!

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
Re: Solutions to the Fermi Paradox
« Reply #29 on: August 30, 2022, 06:56:33 pm »

Can anyone figure out what point Novel is trying to make? Is he afraid of AI stuff that doesn't even exist yet?
Pretty much, it's a common theme in their threads, like some sort of chatbot trained on LessWrong posts.

Maybe he really is a hyperadvanced chatbot. Wouldn't be surprised.
I've occasionally suspected that, but Novel's forum join date means it's unlikely. Probably just a lost human climbing the mountain of enlightenment. Say hi to the Sage when you get to the top, Novel!