the world is run by PDF files


Mathematics is just a language to describe patterns we observe in the world. It is not fundamentally different from English or Chinese; it is just more precise, so there is less ambiguity as to what is actually being claimed. If someone makes a logical argument in mathematics, they cannot rely on vague buzzwords with unclear meanings that would prevent the claim from actually being tested.
Mathematics is just a language that forces you to have extreme clarity, but it is still ultimately a language all the same. Its perfect consistency hardly matters. What matters is that you can describe patterns in the world with it and use it to identify those patterns in a particular context. If the language has some sort of inconsistency that prevents it from being useful in a particular context, then you can just construct a different language that is more useful in that context.
It is, of course, preferable for it to be as consistent as possible so that it is applicable to as many contexts as possible without having to change the language, but absolute, perfect, pure consistency is not strictly necessary either.
ChatGPT just gives the correct answer that the limit doesn’t exist.


Speed of light limitation. Andromeda is 2.5 million light years away. Even if someone debunked special relativity and found you could go faster than light, you would be moving so fast relative to cosmic dust particles that they would destroy the ship. So, either way, you cannot practically go faster than the speed of light.
The only way we could have intergalactic travel is a one-way trip, and humanity here on Earth would be long gone by the time the ship reached its destination, so we could never know whether it succeeded.


Historically, they often have the reverse effect.
Sanctions aren't subtle. They aren't some sneaky way of hurting a country so that its people blame their own government and try to overthrow it. They are about as subtle as bombing a country and then blaming its government. Everyone who lives there sees the impact of the sanctions directly and knows the cause is the foreign power. When a foreign power lays siege to a country, it often has the effect of strengthening people's support for their government. Even the government's flaws can be overlooked, because it can point to the foreign country's actions to blame.
Indeed, North Korea is probably the most sanctioned country in history, yet it is also one of the most stable countries on the planet.
I found it a bit amusing when Russia seized Crimea and the western world's brilliant response was to sanction Crimea and shut down the water supply going to Crimea, to which Russia responded by building one of the largest bridges in Europe to facilitate trade between Russia and Crimea and investing heavily in new water infrastructure.
If one foreign country is trying to starve you, and another country is clearly investing a lot of money into trying to help you… whose favor do you think such a policy is winning?
For some reason the western mind cannot comprehend this. They constantly insist that the western world needs to lay economic siege to every country not aligned with it, and when someone points out that this just makes the people of those countries hate the western world, want nothing to do with it, and strengthens the resolve of their own governments, they deflect by calling you some sort of “apologist” or whatever.
Indeed, during the Cuban Thaw, when Obama lifted some sanctions, he became rather popular in Cuba, to the point that his approval ratings at times even surpassed Fidel’s, and Cuba started implementing reforms to allow further economic cooperation with the US government and US businesses. They were very happy to become an ally of the US, but then Democrats and Republicans suddenly decided to collectively do a U-turn, abandon all of that, and destroy all the goodwill that had been built up.
But the people of Cuba are not going to capitulate, because the government is genuinely popular, as US internal documents repeatedly admit, and that popularity will only be strengthened by a tightened blockade. The US is just going to create a North Korea-style scenario off its own coast.
Depends upon what you mean by realism. If you just mean belief in a physical reality independent of any conscious observer, I am not of the opinion that you need MWI to hold a philosophically realist perspective.
For some reason, everyone intuitively accepts the relativity of time and space in special relativity as an ontological feature of the world, but when it comes to the relativity of the quantum state, people’s brains explode and they start treating it like it has to do with “consciousness” or “subjectivity” or something and that if you accept it then you’re somehow denying the existence of objective reality. I have seen this kind of mentality throughout the literature and it has never made sense to me.
Even Eugene Wigner did this: when he proposed the “Wigner’s friend” thought experiment, he pointed out how two different observers can come to describe the same system differently, and then concluded that this proves quantum mechanics is deeply connected to “consciousness.” But we have known that two observers can describe the same system differently since Galileo introduced the concept of relativity back in 1632. There is no reason to take this as having anything to do with consciousness or subjectivity or anything like that.
(You can also treat the wavefunction nomologically as well, and then the nomological behavior you’d expect from particles would be relative, but the ontological-nomological distinction is maybe getting too much into the weeds of philosophy here.)
I am partial to the way the physicist Francois-Igor Pris puts it: reality exists independently of the conscious observer, but not independently of context. You have to specify the context in which you are making an ontological claim for it to have physical meaning. This context can be the perspective of a conscious observer, but nothing about the observer is intrinsic here; what is intrinsic is the context, and the observer's perspective is just one of many possible contexts in which an ontological claim can be made. Two observers can describe the same train as traveling at different velocities, not because they are conscious observers, but because they are describing the same train from different contexts.
The philosopher Jocelyn Benoist and the physicist Francois-Igor Pris have argued that the natural world does have a kind of inherent observer-observed divide, but that these terms are misleading, since “subject” tends to imply a human subject and “observer” tends to imply a conscious observer. A lot of the confusion is cleared up once you describe this divide in more neutral, non-anthropomorphic terms, which they settle on by speaking of the “reality” and the “context.” The reality of the velocity of the train will be different in different contexts. You don’t have to invoke “observer-dependence” to describe relativity. Hence, you can indeed describe quantum theory as a theory of physical reality independent of the observer.
MWI very specifically commits to the existence of a universal wavefunction. Everett’s original paper is literally titled “The Theory of the Universal Wavefunction.” If you instead only take relative states seriously, that position is much closer to relational quantum mechanics. In fact, Carlo Rovelli explicitly describes RQM as adopting Everett’s relative-state idea while rejecting the notion of a universal quantum state.
MWI claims there exists a universal quantum state, but quantum theory works perfectly well without this assumption if quantum states are taken to be fundamentally relative. Every quantum state is defined in relation to something else, which is made clear by the Wigner’s friend scenario where different observers legitimately assign different states to the same system. If states are fundamentally relative, then a “universal” quantum state makes about as much sense as a “universal velocity” in Galilean relativity.
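To make that concrete, here is the usual bookkeeping for the scenario (a minimal sketch; the kets are just illustrative labels):

```latex
% The friend, inside the sealed lab, measures S and sees outcome 0,
% so relative to her the state is simply:
\psi_S = |0\rangle
% Wigner, outside the lab, has not yet interacted with anything inside,
% so relative to him the system and friend are jointly in:
\Psi_{SF} = \tfrac{1}{\sqrt{2}}\big(\,|0\rangle_S\,|\text{saw }0\rangle_F
          + |1\rangle_S\,|\text{saw }1\rangle_F\,\big)
% Two different, equally legitimate state assignments for the same system,
% just as two frames assign two velocities to the same train.
```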
You could arbitrarily choose a reference frame in Galilean relativity and declare it universal, but this requires an extra postulate, is unnecessary for the theory, and is completely arbitrary. Likewise, you could pick some observer’s perspective and call that the universal wavefunction, but there is no non-arbitrary reason to privilege it. That wavefunction would still be relative to that observer, just with special status assigned by fiat.
Worse, such a perspective could never truly be universal because it could not include itself. To do that you would need another external perspective, leading to infinite regress. You never obtain a quantum state that includes the entire universe. Any state you define is always relative to something within the universe, unless you define it relative to something outside of the universe, but at that point you are talking about God and not science.
The analogy to Galilean relativity is actually too kind. Galilean relativity relies on Euclidean space as a background, allowing an external viewpoint fixed to empty coordinates. Hilbert space is not a background space at all; it is always defined in terms of physical systems, what is known as a constructed space. You can transform perspectives in spacetime, but there is no transformation to a background perspective in Hilbert space because no such background exists. The closest thing that exists is a statistical transformation between different perspectives within Liouville space, but this only works for objects within the space; you cannot transform to the perspective of the background itself, because it is not a background space.
One of the papers I linked also provides a no-go theorem as to why a universal quantum state cannot possibly exist in a way that would be consistent with relative perspectives. There are just so many conceptual and mathematical problems with a universal wavefunction. Even if you somehow resolve them all, your solution will be far more convoluted than just taking the relative states of quantum mechanics at face value. There is no need to “explain measurement” or introduce a many worlds or a universal wavefunction if you just accept the relative nature of the theory at face value and move on, rather than trying to escape it (for some reason).
But this is just one issue. The other elephant in the room is the fifth point: even if you construct a theory that is at least mathematically consistent, it would still contain no observables. MWI is a “theory” which lacks observables entirely.
The Many Worlds interpretation is rather unconvincing to me for many reasons.
|1| It claims it is “simpler” just by dropping the Born rule, but it is mathematically impossible to derive the Born rule from the Schrödinger equation alone. You must include some additional assumption to derive it, so it necessarily ends up introducing an additional postulate at some point from which to derive the Born rule. Its number of assumptions thus always equals that of any other interpretation, but with additional mathematical complexity introduced by the derivation.
|2| It claims to be “local” because there is no nonlocal wavefunction collapse. But the EPR paper already proves it is mathematically impossible for something to match the predictions of quantum theory and be causally local if there are no hidden variables. This is obscured by the fact that MWI proponents like to claim the Born rule probabilities are a subjective illusion and not physically real, but illusions still have a physical cause that needs to be physically explained, and any explanation you give must reproduce the Born rule probabilities, and thus must violate causal locality. Some MWI proponents try to get around this by redefining locality as relativistic locality, but even Copenhagen is local in that sense, so you end up with no benefit over Copenhagen if you accept that redefinition.
|3| It relies on the belief that there exists an additional mathematical entity Ψ, as opposed to just ψ, but there exists no mathematical definition or derivation of this entity. Even Everett agreed that all the little ψ we work with in quantum theory are relative states, but he then proposed that there exists an absolute universal Ψ, and to me this makes about as much sense as claiming there exists a universal velocity in Galilean relativity. There is no way to combine relative velocities into a universal velocity; they are just fundamentally relative. Similarly, wavefunctions in quantum mechanics are fundamentally relative. A universal wavefunction does not meaningfully exist.
|4| You describe MWI as a kind of copying of the world into different branches where different observers see different outcomes of the experiment, but that is not what MWI actually claims. MWI claims the Born rule is a subjective illusion and all that exists is the Schrödinger equation, but the Schrödinger equation never branches. If, for example, a photon hits a beam splitter with a 50% chance of passing through and a 50% chance of being reflected, and you have a detector on either side, the Schrödinger equation will never evolve into a state that looks anything like the photon having passed through or having been reflected, nor will it ever evolve into a state that looks anything like it having passed through and having been reflected. The state it evolves into is entirely disconnected from the discrete states we actually observe, except through the Born rule. Indeed, even the probabilities I just gave you come from the Born rule.
This was something Einstein pointed out in relation to atomic decay: no matter how long you evolve the Schrödinger equation, it never evolves into a state that looks anything like decay vs non-decay. You never get a state that looks like one or the other, both, or neither. You end up with something entirely unrecognizable from what we would actually observe in an experiment, connected back to the probabilities of decay vs non-decay only by the Born rule. If the universe really is just the Schrödinger equation, you simply cannot say that it branches into two “worlds” where in one you see one outcome and in another you see a different outcome, because the Schrödinger equation never gives you that. You would have to claim that the entire world consists of a single evolving infinite-dimensional universal wavefunction that is nothing akin to anything we have ever observed before.
There is a good lecture below by Maudlin on this problem, that MWI presents a theory which has no connection to observable reality because nothing within the theory contains any observables.
Rovelli also comments on it:
The gigantic, universal ψ wave that contains all the possible worlds is like Hegel’s dark night in which all cows are black: it does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world. The Many Worlds interpretation does not explain them clearly. It is not enough to know the ψ wave and Schrödinger’s equation in order to define and use quantum theory: we need to specify an algebra of observables, otherwise we cannot calculate anything and there is no relation with the phenomena of our experience. The role of this algebra of observables, which is extremely clear in other interpretations, is not at all clear in the Many Worlds interpretation.
— Carlo Rovelli, “Helgoland: Making Sense of the Quantum Revolution”
There is a lot of confusion because physicists changed the meaning of “locality” after the EPR paper to refer to relativistic locality (no sending information faster than light), which was not what Einstein was on about. Einstein’s locality is probably most succinctly summarized as follows:
Assume a bunch of particles are interacting, where S is the state of the system of interacting particles prior to the interaction and S’ is the state of the system after the interaction. We then want to look at the variance (statistical spread) of the probability distribution of S’ preconditioned on S, that is to say, a prediction of the state of the system after the interaction given complete knowledge of the state of the system prior to the interaction, and then compare that to the variance of another prediction preconditioned on both S and x, where x is the state of something outside the system of interacting particles.
If a theory is local, then the two should always be equal for every possible value of x. This is because the outcome of a local interaction should be determined only by what participates in the local interaction, that is to say, S, so preconditioning on complete knowledge of the initial states of everything participating in the interaction should give you sufficient knowledge to predict the outcome of the interaction, that is to say, S’, to the best extent that is physically possible.
If you can include something outside the interaction, that is to say, x, and it improves your prediction further, then the theory must be nonlocal, because it contains an irreducible dependence upon something not involved in the interaction.
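Written compactly (my notation, just restating the prose above):

```latex
% S  : complete state of the interacting system before the interaction
% S' : the state of that system after the interaction
% x  : the state of anything outside the interacting system
%
% Causal locality: conditioning on x never sharpens the prediction of S'.
\operatorname{Var}\left[\,S' \mid S\,\right] = \operatorname{Var}\left[\,S' \mid S,\, x\,\right]
\quad \text{for all } x.
```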
The point of the EPR paper is that if you don’t assume hidden variables, this definition of locality is broken. Two entangled particles are said to be ontologically in a superposition of states, meaning complete knowledge of their states prior to the measurement interaction can only predict each outcome with a 50%/50% distribution, but if you precondition on knowledge of a distant observer’s measurement, then you can improve your prediction of the measurement on your local particle to 100% certainty, which violates this locality condition.
This is still local in the classical case, where the only reason you could improve your prediction is that you were ignorant of the initial state of the particle to begin with, so you never actually preconditioned on the complete initial state of the system. Hence, adding hidden variables would, supposedly, restore this notion of locality, which we can call causal locality as opposed to relativistic locality.
What Bell’s theorem proves is that adding hidden variables does not restore causal locality. This is because, as he proves, in quantum mechanics, the state of an individual particle in a collection of entangled particles can have dependence upon the configuration of a collection of measurement devices, even though it only ever interacts with an individual measurement device. That means this violation of causal locality is intrinsic to the mathematics of the theory and is not something that just arises due to a lack of hidden variables.
Even worse, as Bell says, adding hidden variables appears to make it “grossly nonlocal,” by which he meant it violates relativistic locality as well, at least without introducing something like superdeterminism or retrocausality.


It’s… literally the opposite. The giant AI models with trillions of parameters are not something you can run without spending many thousands of dollars, and quantum computers cost millions. These are definitely not services that are going to fall into the hands of everyday people. At best you get small AI models.


The reason quantum computers are theoretically faster is because of the non-separable nature of quantum systems.
Imagine you have a classical computer where some logic gates flip bits randomly, and multi-bit logic gates can flip them randomly but in a correlated way. These kinds of computers exist and are called probabilistic computers, and you can represent all the bits using a vector and the logic gates with matrices called stochastic matrices.
The vector is necessarily non-separable, meaning you cannot get the right predictions if you describe the statistics of the computer with a separate vector assigned to each p-bit; you must assign a single vector to all the p-bits taken together. This is because the statistics can become correlated, i.e. the statistics of one p-bit can depend upon another, and thus if you describe them using separate vectors you lose the information about the correlations between the p-bits, as the sketch below shows.
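A minimal numpy sketch of the point (the correlated-flip gate and all names here are my own illustration, not from any particular library): two p-bits that always agree need one joint 4-vector, and splitting them into two separate 2-vectors destroys the correlation.

```python
import numpy as np

# Joint state of two p-bits as ONE vector over outcomes (00, 01, 10, 11),
# starting deterministically in 00.
joint = np.array([1.0, 0.0, 0.0, 0.0])

# A "correlated flip" gate: with probability 1/2, flip BOTH bits at once.
# Every column is a probability distribution, i.e. a stochastic matrix.
G = np.array([
    [0.5, 0.0, 0.0, 0.5],   # -> 00
    [0.0, 0.5, 0.5, 0.0],   # -> 01
    [0.0, 0.5, 0.5, 0.0],   # -> 10
    [0.5, 0.0, 0.0, 0.5],   # -> 11
])

joint = G @ joint            # [0.5, 0, 0, 0.5]: the bits always agree

# Marginal (separate) vectors for each p-bit individually:
p_bit0 = np.array([joint[0] + joint[1], joint[2] + joint[3]])   # [0.5, 0.5]
p_bit1 = np.array([joint[0] + joint[2], joint[1] + joint[3]])   # [0.5, 0.5]

# Rebuilding the joint state from the separate vectors predicts
# P(01) = 0.25, but the true probability of 01 is exactly 0.
separable = np.kron(p_bit0, p_bit1)
print(joint)      # [0.5 0.  0.  0.5]
print(separable)  # [0.25 0.25 0.25 0.25] <- the correlation is gone

# The joint vector also grows as 2^N, which is the next point:
for n in (2, 10, 20, 30):
    print(n, "p-bits ->", 2 ** n, "entries in the joint vector")
```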
The p-bit vector grows in complexity exponentially as you add more p-bits to the system (complexity = 2^N, where N is the number of p-bits), even though the total state of all the p-bits only grows linearly (complexity = 2N). The reason for this is purely epistemic: the physical system only grows in complexity linearly, but because we are ignorant of the actual state of the system (2N), we have to consider all possible configurations of the system (2^N) over an infinite number of experiments.
The exponential complexity arises from considering what physicists call an “ensemble” of individual systems. We are not considering the state of the physical system as it exists right now (which only has a complexity of 2N), precisely because we do not know the values of the p-bits; instead we are considering a statistical distribution, which represents repeating the same experiment an infinite number of times and distributing the results. In such an ensemble the system takes every possible path, and thus the ensemble has far more complexity (2^N).
This is a classical computer with p-bits. What about a quantum computer with q-bits? It turns out that you can represent all of quantum mechanics simply by allowing probability theory to have negative numbers. If you introduce negative numbers, you get what are called quasi-probabilities, and this is enough to reproduce the logic of quantum mechanics.
You can imagine that quantum computers consist of q-bits that can be either 0 or 1 and logic gates that randomly flip their states, but rather than representing the q-bit in terms of its probability of being 0 or 1, you represent it with four numbers: the first two associated with its probability of being 0 (summing them gives the real probability of 0) and the second two associated with its probability of being 1 (summing them gives the real probability of 1).
Like normal probability theory, the numbers have to all add up to 1, being 100%, but because you have two numbers assigned to each state, you can have some quasi-probabilities be negative while the whole thing still adds up to 100%. (Note: we use two numbers instead of one to describe each state with quasi-probabilities because otherwise the introduction of negative numbers would break L1 normalization, which is a crucial feature to probability theory.)
Indeed, with that simple modification, the rest of the theory just becomes normal probability theory, and you can do everything you would normally do in normal classical probability theory, such as build probability trees and whatever to predict the behavior of the system.
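One concrete way to realize the four-number picture above is the discrete Wigner representation of a single q-bit (my choice of realization for illustration; nothing here is specific to the linked paper): four quasi-probabilities that sum to 1, can go negative, and recover the ordinary Born probabilities when summed in pairs.

```python
import numpy as np

# Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def wigner(rho):
    """The four quasi-probabilities W[q, p] of a single q-bit state.
    They always sum to 1, but individual entries may be negative."""
    W = np.zeros((2, 2))
    for q in (0, 1):
        for p in (0, 1):
            A = 0.5 * (I + (-1)**q * Z + (-1)**p * X + (-1)**(q + p) * Y)
            W[q, p] = 0.5 * np.trace(rho @ A).real
    return W

# An ordinary basis state |0><0|: all four numbers are non-negative.
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
print(wigner(rho0))                  # [[0.5, 0.5], [0, 0]]

# A state with Bloch vector along -(1,1,1): one entry goes negative.
n = -np.array([1, 1, 1]) / np.sqrt(3)
rho = 0.5 * (I + n[0] * X + n[1] * Y + n[2] * Z)
W = wigner(rho)
print(W)                             # W[0, 0] is about -0.183
print(W.sum())                       # ... yet the total is still 1.0

# Summing pairs recovers the ordinary Born probabilities:
p0 = W[0, 0] + W[0, 1]               # probability of measuring 0
p1 = W[1, 0] + W[1, 1]               # probability of measuring 1
print(p0, p1)                        # ~0.211 and ~0.789
```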
However, this is where it gets interesting.
As we said before, the exponential complexity of classical probability can be treated as merely epistemic, because we are considering an ensemble of systems, even though the physical system in reality only has linear complexity. Yet it is possible to prove that the exponential complexity of a quasi-probabilistic system cannot be treated as epistemic: there is no classical system with linear complexity where an ensemble of that system will give you quasi-probabilistic behavior.
As you add more q-bits to a quantum computer, its complexity grows exponentially in a way that is irreducible to linear complexity. For a classical computer to keep up, every time an additional q-bit is added, the number of bits needed to simulate it has to grow exponentially. At just 300 q-bits, the complexity is 2^N = 2^300, which means the number of bits you would need to simulate it would exceed the number of atoms in the observable universe.
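The arithmetic behind that claim:

```latex
2^{300} = \left(2^{10}\right)^{30} \approx \left(10^{3}\right)^{30} = 10^{90}
\;\gg\; 10^{80} \approx \text{the estimated number of atoms in the observable universe.}
```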
This is what I mean by quantum systems being inherently “non-separable.” You cannot take an exponentially complex quantum system and imagine it as separable into an ensemble of many individual linearly complex systems. Even if it turns out that quantum mechanics is not fundamental and there are deeper deterministic dynamics, the deeper deterministic dynamics must still have exponential complexity for the physical state of the system.
In practice, this increase in complexity does not mean you can always solve problems faster. The system might be more complex, but it requires clever algorithms to figure out how to actually translate that into problem solving, and currently there are only a handful of known algorithms you can significantly speed up with quantum computers.
For reference: https://arxiv.org/abs/0711.4770


Multipolarity bros really need to stop fantasizing about BRICS being some sort of anti-imperialist military alliance. It’s just delusion: BRICS isn’t a military bloc, it is about trade. Technically it isn’t even a trading bloc, as it is literally just a forum to discuss trade relations. If you think any member of BRICS has defense obligations to another, you are incredibly disconnected from reality.


China obviously doesn’t give af about supplanting the US as a world power. If they did, they would actually do stuff internationally. There is no Chinese equivalent to NATO. All they will do regarding the Venezuelan president being kidnapped is strongly condemn it. They won’t even offer the PSUV any security guarantees. Literally all the Chinese government believes in is (1) trading with as many people as possible and (2) reuniting its breakaway territories. They have no ambitions beyond that. It is not true that China does business mostly with countries rejected by the Americans; it does business with literally everyone. The Chinese love to trade with everyone. While American media constantly criticizes China on every front, calling for regime change, attacking their political system, their leaders, etc., the only time you ever hear criticism of the US in Chinese media is when the US does something viewed as harming trade, like the tariffs.


If you have a very noisy quantum communication channel, you can combine a second algorithm, entanglement distillation, with quantum teleportation to effectively bypass the noisy quantum channel and send a qubit over a classical communication channel instead. That is the main utility I see for it: it is very useful for transmitting qubits over a noisy quantum network.


The people who named it “quantum teleportation” had in mind Star Trek teleporters which work by “scanning” the object, destroying it, and then beaming the scanned information to another location where it is then reconstructed.
Quantum teleportation is basically an algorithm that performs a destructive measurement (kind of like “scanning”) of the quantum state of one qubit and then sends the information over a classical communication channel (could even be a beam if you wanted) to another party which can then use that information to reconstruct the quantum state on another qubit.
The point is that there is still the “beaming” step, i.e. you still have to send the measurement information over a classical channel, which cannot exceed the speed of light.
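Here is a self-contained toy simulation of the whole protocol (a numpy sketch of my own, not any library’s actual API), showing that the only thing that travels is two classical bits:

```python
import numpy as np

rng = np.random.default_rng()

# Single-qubit gates
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def on(gate, qubit, n=3):
    """Lift a 1-qubit gate to act on `qubit` of an n-qubit register."""
    ops = [I] * n
    ops[qubit] = gate
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT as a 2^n x 2^n permutation matrix (qubit 0 = leftmost bit)."""
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        M[j, i] = 1
    return M

# Qubit 0: the state to "teleport" (a random unknown state).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Qubits 1 and 2: a Bell pair (|00> + |11>)/sqrt(2) shared by Alice and Bob.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice "scans": CNOT(0 -> 1), then H on qubit 0, then measures qubits 0, 1.
state = on(H, 0) @ cnot(0, 1) @ state
probs = np.abs(state) ** 2
probs /= probs.sum()
outcome = rng.choice(8, p=probs)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1   # the two classical bits

# Collapse onto the measured branch and renormalize.
keep = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1
                 for i in range(8)])
state = np.where(keep, state, 0)
state /= np.linalg.norm(state)

# Bob receives (m0, m1) over a CLASSICAL channel and corrects his qubit.
if m1:
    state = on(X, 2) @ state
if m0:
    state = on(Z, 2) @ state

# Bob's qubit now holds the original state; Alice's copy was destroyed.
bob = state.reshape(2, 2, 2)[m0, m1, :]
print(np.allclose(bob, psi))   # True
```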
There is no limit to entanglement, as everything is constantly interacting with everything else and spreading entanglement around. That is in fact what decoherence is about: spreading the entanglement throughout trillions of particles in the environment dilutes it such that quantum interference effects are too subtle to notice, but they are all technically entangled. So if you think entanglement means things are one entity, then you pretty much have to treat the whole universe as one entity. That was the position of Bohm and Blokhintsiev.