• 0 Posts
  • 9 Comments
Joined 1 month ago
Cake day: May 18th, 2025


  • Only if you completely redefine some aspect of the equation. You’d have to define “5” to actually mean “4”, or change the meaning of “+” or “=” in a way that alters the operation. 2+2=4 isn’t just an abstract statement; it’s grounded in the way the physical world works. If you have 2 apples and I give you 2 more, you don’t suddenly have 5 apples just because we all decided 2+2=5.

    Orwell’s point in 1984 wasn’t that belief changes the world; it was about the power of brainwashing and how fascism demands obedience.



  • Basically, physics says that nothing, not even information, can travel faster than the speed of light. It’s a universal limit that falls out of the math of relativity, and it’s what preserves “causality”.

    Because of this, FTL communication is probably impossible. Quantum entanglement seems like it could provide a loophole, but it doesn’t work that way. To use quantum entanglement for communication, you’d need a confirmation message delivered by some other channel (every quantum message needs a non-quantum confirmation). That confirmation would be bound by the speed of light, thus preserving causality.

    This is a very very rough description based on my memory, so some details may be a little off, but it should cover the gist. This article goes into more detail:

    https://bigthink.com/starts-with-a-bang/quantum-entanglement-faster-than-light/

    Edit: After reading, the better answer is that attempting to impart information onto the entangled particles to send a message necessarily breaks the entanglement, so the information never reaches the other side. Entangling the particles makes their states related to each other, but only at the time of entanglement; anything that changes either particle (including measuring it) breaks the entanglement going forward.
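    You can actually see the no-signaling part in a few lines of linear algebra. This is not from the article, just a toy sketch: it sets up a Bell pair and checks that whatever rotation/measurement basis the far side applies to its own particle, the near side’s outcome probabilities stay 50/50, so no message gets through.

    ```python
    import numpy as np

    # Bell state (|00> + |11>)/sqrt(2) as a 2x2 amplitude matrix:
    # rows = Alice's outcome, columns = Bob's outcome.
    psi = np.array([[1.0, 0.0],
                    [0.0, 1.0]]) / np.sqrt(2)

    def alice_marginal(state):
        """Probability of each of Alice's outcomes, summed over Bob's."""
        return (np.abs(state) ** 2).sum(axis=1)

    # Alice's statistics before Bob does anything: 50/50.
    print(alice_marginal(psi))

    # Bob measures in a rotated basis: apply a rotation to his qubit only.
    theta = 0.7  # any angle Bob picks
    bob_rotation = np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])
    rotated = psi @ bob_rotation.T  # acts on Bob's index only

    # Alice's statistics are unchanged -- no information got through.
    print(alice_marginal(rotated))
    ```

    Whatever Bob does locally, Alice still sees a coin flip; she only learns anything once Bob sends her a classical (light-speed-limited) message comparing notes.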


  • I agree with most of the other comments here. Is actual AGI something to be worried about? I’m not sure. I don’t know if it’s even possible on our current technology path.

    Based on what I know, it’s almost certainly not going to come from the current crop of LLMs and related research. Despite many claims, they don’t actually think or reason; they’re just really complicated statistical models. And while they can do some interesting and impressive things, I don’t see any path of progression that takes them from what they are now to actual intelligence.
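    To make the “statistical model” point concrete, here’s a deliberately silly toy version of the same idea: count which word follows which in some text, then sample the next word from those counts. Real LLMs use huge neural networks over enormous corpora, but the training objective is the same kind of thing: predict the next token from observed statistics, no understanding required.

    ```python
    import random
    from collections import Counter, defaultdict

    # A tiny "training corpus" (made up for illustration).
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        """Sample the next word in proportion to how often it followed prev."""
        options = counts[prev]
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    # "the" is followed by cat, cat, mat, fish, so P(cat | the) = 0.5.
    print(next_word("the"))
    ```

    Scale the counting up to billions of parameters and trillions of tokens and you get fluent output, but it’s still pattern completion over statistics, not reasoning.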

    Could we develop something in my lifetime (the next 50-ish years or so for me)? Maybe. I think the chances are slim without a major shift, and I think it would take a public effort akin to the Manhattan Project or the Internet to achieve, but it’s possible. In the next 5 years? Definitely not, some random, massive, lucky break notwithstanding.

    As others have said here, even without AGI, current capitalist practices are already using the limited capabilities of LLMs to upend the labor market and put lots of people out of a job, even when the LLMs can’t really replace those people effectively. But that’s not a problem with AI; it’s a problem with capitalism, and it happens with any kind of advancement. They’ll take literally any excuse to extract extra value.

    In summary, I wouldn’t worry about AGI. There are so many other things that are problems right now, some of them already existential threats, that worrying about this big old “maybe in 50 years” isn’t really worth your time and energy.