



where can i buy one
If you appeal to heat death then you cannot say brains pop back into existence either because “matter has a finite life,” and so it is self-defeating. If brains can pop back into existence due to random fluctuations then surely planets and stars could as well given enough time.
In a universe that is infinitely large, it seems more likely that brains would come into existence through simpler deterministic processes, like they did on Earth, than through random fluctuations, no?


Einstein didn’t even get a Nobel Prize for special relativity because it was considered too radical at the time.
He shouldn’t have gotten one for SR specifically anyway, because Hendrik Lorentz had already developed a mathematically equivalent theory, presented a year prior to Einstein’s.
The speed of light can be derived from Maxwell’s equations. It is strange to be able to derive a speed just by analyzing how electromagnetism works, because anyone in any reference frame would derive the same speed, which implies the existence of a universal speed. But if the speed is universal, what is it universal relative to?
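To make this concrete, here is the standard textbook computation (my own sketch, not part of the original argument): the vacuum wave equation that falls out of Maxwell’s equations propagates at c = 1/√(ε₀μ₀), a speed fixed entirely by two measurable electromagnetic constants, with no reference frame ever specified.

```python
import math

# Speed of an electromagnetic wave from Maxwell's equations: c = 1/sqrt(eps0 * mu0).
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m), CODATA value
mu0 = 1.25663706212e-6   # vacuum permeability (H/m), CODATA value

c = 1 / math.sqrt(eps0 * mu0)
print(c)  # ~2.998e8 m/s
```

That this number comes out of electromagnetism alone, frame-free, is exactly what made the question "relative to what?" so pressing.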
Physicists prior to Einstein believed there might be a universal reference frame which defines absolute time and absolute space, these days called a preferred foliation. The Michelson-Morley experiment was an attempt to measure the existence of this preferred foliation because most theories of how it worked would render it detectable in principle, but found no evidence for it.
Most physicists these days retell this experiment as having debunked the idea and led to its replacement with Einstein’s special relativity. But the truth is more complicated than that, because Lorentz found you could patch the idea by assuming objects physically contract based on their motion relative to the preferred foliation. Lorentz’s theory was presented in 1904, a year before Einstein’s, and was mathematically equivalent, so anything Einstein’s theory would predict, Lorentz’s theory would also have predicted.
The reason Lorentz’s theory fell by the wayside is that, by explaining the results of the Michelson-Morley experiment, which was meant to detect the preferred foliation, it rendered the foliation undetectable in principle, and so people preferred Einstein’s theory, which threw this undetectable element out entirely. But it would still be weird to give Einstein the Nobel Prize for what is ultimately just a simplification of Lorentz’s theory. (Einstein did receive one anyway, for the photoelectric effect, which he did deserve.)
But there are also good reasons these days to consider putting the preferred foliation back in and concluding that Lorentz was right. The Friedmann solution to Einstein’s general relativity (the solution associated with the universe we actually live in) spontaneously gives rise to a preferred foliation which is actually empirically observable. You can measure your absolute motion relative to the universe by looking at the dipole in the cosmic microwave background radiation. Since we can measure it, and have actually measured our absolute motion in the universe, the argument against Lorentz’s theory is much weaker.
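A rough sketch of that measurement (my own back-of-the-envelope, with approximate numbers): the CMB dipole amplitude ΔT relates to our velocity through the non-relativistic Doppler formula ΔT/T ≈ v/c.

```python
# Estimate our velocity relative to the CMB rest frame from the measured dipole,
# using the non-relativistic Doppler relation dT/T ~ v/c.
c = 299_792_458.0   # m/s
T = 2.725           # K, CMB monopole temperature
dT = 3.36e-3        # K, approximate measured dipole amplitude

v = c * dT / T
print(v / 1000)  # ~370 km/s
```

That ~370 km/s figure is the often-quoted absolute motion of the solar system relative to the CMB rest frame.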
An even stronger argument, however, comes from quantum mechanics. A famous theorem by the physicist John Bell proves the impossibility of “local realism,” where locality means locality in the sense of special relativity, and realism means the belief that particles have real states in the physical world independently of you looking at them (called ontic states), which explain what shows up on your measurement device when you try to measure them. Since many physicists are committed to the idea of special relativity, they conclude that Bell’s theorem must debunk realism, i.e. that objective reality does not exist independently of you looking at it, and devolve into bizarre quantum mysticism and weirdness.
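To make the violation concrete, here is a quick numerical check (my own sketch): the singlet state predicts the correlation E(a, b) = −cos(a − b), and plugging in the standard CHSH measurement angles yields 2√2, above the bound of 2 that any local-realist model must satisfy.

```python
import numpy as np

# Quantum correlation for the singlet state: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH measurement angle choices.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, above the local-realist bound of 2
```

Bell’s theorem says no theory that is both local (in the SR sense) and realist can reach an S above 2, so something in that conjunction has to give.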
But you can equally interpret this to mean that special relativity is wrong and that the preferred foliation needs to be put back in. The physicist Hrvoje Nikolic, for example, published a paper titled “Relativistic QFT from a Bohmian perspective: A proof of concept” showing that you can fit quantum mechanics into a realist theory that reproduces the predictions of relativistic quantum mechanics if you add back in a preferred foliation.


“Why” implies an underlying ontology. Maybe there is something underneath it, but as far as we currently know, that is as far down as it goes. If we don’t at least tentatively accept that our current most fundamental theories are the fundamental ontology of nature, at least as far as we currently know, then we can never believe anything about nature at all, because it would be an infinite regress. Every time we discover a new theory we can ask “well, why does it work like that?” and so it would be impossible to actually believe anything about nature.


What is the distinction you are making between knowing the math and understanding it?


There are nonlocal effects in quantum mechanics but I am not sure I would consider quantum teleportation to be one of them. Quantum teleportation may look at first glance to be nonlocal but it can be trivially fit to local hidden variable models, such as Spekkens’ toy model, which makes it at least seem to me to belong in the class of local algorithms.
You have to remember that what is being “transferred” is a statistical description, not something physically tangible, and it is only observable in a large sample size (an ensemble). Hence, it would be strange to think of the qubit as holding a register of its entire quantum state, with that register disappearing and reappearing on another qubit. The total information in the quantum state only exists in an ensemble.
In an individual run of the experiment, clearly, the joint measurement of 2 bits of information and their transmission over a classical channel is not transmitting the entire quantum state, but the quantum state is not something that exists in an individual run of the experiment anyway. The total information transmitted over an ensemble is much greater and would provide sufficient information to move the statistical description of one qubit to another entirely locally.
The complete quantum state is transmitted through the classical channel over the whole ensemble, and not in an individual run of the experiment. Hence, it can be replicated in a local model. It only looks like more than 2 bits of data is moving from one qubit to the other if you treat the quantum state as if it actually is a real physical property of a single qubit, because obviously that is not something that can be specified with 2 bits of information, but an ensemble can indeed encode a continuous distribution.
This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.
— Dmitry Blokhintsev
Here’s a trivially simple analogy. We describe a system in a statistical distribution of a single bit with [a; b] where a is the probability of 0 and b is the probability of 1. This is a continuous distribution and thus cannot be specified with just 1 bit of information. But we set up a protocol where I measure this bit and send you the bit’s value, and then you set your own bit to match what you received. The statistics on your bit now will also be guaranteed to be [a; b]. How is it that we transmitted a continuous statistical description that cannot be specified in just 1 bit with only 1 bit of information? Because we didn’t. In every single individual trial, we are always just transmitting 1 single bit. The statistical descriptions refer to an ensemble, and so you have to consider the amount of information actually transmitted over the ensemble.
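The analogy can be simulated directly (a toy sketch; the value a = 0.3 and the ensemble size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.3          # P(bit = 0); the description [a; 1-a] is a continuous quantity
n = 100_000      # size of the ensemble

# Alice's ensemble of bits, each drawn from the distribution [a; 1-a].
alice = (rng.random(n) >= a).astype(int)

# Protocol: in each trial Alice sends exactly ONE bit; Bob sets his to match.
bob = alice.copy()

# Bob's ensemble now reproduces the full continuous description [a; 1-a],
# even though each individual run only ever moved a single bit.
print(bob.mean())  # ~0.7, i.e. P(bit = 1) = 1 - a
```

No single trial carries the continuous distribution; it only shows up in the statistics over the whole ensemble, which is the point of the analogy.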
A qubit’s quantum state has 2 degrees of freedom, as it can be specified on the Bloch sphere with just two angles (a polar and an azimuthal angle). The amount of data transmitted over the classical channel is 2 bits. Over an ensemble, those 2 bits become 2 continuous values, and thus the classical channel over an ensemble carries exactly the degrees of freedom needed to describe the complete quantum state of a single qubit.
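The full protocol can be sketched in plain numpy (my own sketch of the standard teleportation circuit) to verify that a single run moves exactly 2 classical bits while Bob ends up holding the state:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random single-qubit state |psi> = a|0> + b|1> to teleport.
amp = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amp / np.linalg.norm(amp)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
state = np.kron(psi, bell)  # qubit 0: psi; qubits 1,2: shared Bell pair

# Alice: CNOT (q0 control, q1 target), then H on q0.
state = np.kron(CNOT, I2) @ state
state = np.kron(H, np.eye(4)) @ state

# Alice measures q0 and q1 -> the 2 classical bits (m0, m1) sent to Bob.
state = state.reshape(2, 2, 2)
marginal = np.sum(np.abs(state) ** 2, axis=2).ravel()
k = rng.choice(4, p=marginal)
m0, m1 = divmod(k, 2)

# Bob's qubit collapses to state[m0, m1, :]; he applies the corrections.
bob = state[m0, m1, :]
bob = bob / np.linalg.norm(bob)
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

fidelity = abs(np.vdot(psi, bob)) ** 2
print(fidelity)  # 1.0 (up to floating point): Bob holds psi
```

Per run, all that crosses the classical channel is (m0, m1); the continuous description only accumulates over many runs, exactly as the bit analogy above suggests.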


I got interested in quantum computing as a way to combat quantum mysticism. Quantum mystics love to use quantum mechanics to justify their mystical claims, like quantum immortality, quantum consciousness, quantum healing, etc. Some mystics use quantum mechanics to “prove” things like we all live inside of a big “cosmic consciousness” and there is no objective reality, and they often reference papers published in the actual academic literature.
These papers on quantum foundations are almost universally framed in terms of a quantum circuit, because this is quantum information science, and they give you a logical argument that there is something “weird” about quantum mechanics’ logical structure, as shown in things like Bell’s theorem, the Frauchiger-Renner paradox, the Elitzur-Vaidman paradox, etc.
If a person claims something mystical and sends you a paper, and you can’t understand the paper, how are you supposed to respond? But you can use quantum computing as a tool to help you learn quantum information science so that you can eventually parse the paper, and then you can know how to rebut their mystical claims. But without actually studying the mathematics you will be at a loss.
You have to put some effort into understanding the mathematics. If you just go vaguely off of what you see in YouTube videos then you’re not going to understand what is actually being talked about. You can go through, for example, IBM’s courses on the basics of quantum computing and read a textbook on quantum computing, and that gives you the foundations in quantum information science needed to actually parse the logical arguments in these papers and what they are really trying to say.


Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they cleverly found a way to use the space more effectively, then it was as if they had packed more transistors into the same area, so they would call it a smaller-nanometer process node, even though, quite literally, they did not shrink the transistors or increase the transistor count on the die.
This actually started to happen around 2015. These clever tricks were always exaggerated, because there isn’t an objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, which gave huge leeway for exaggeration. In reality, actual performance gains have drastically slowed since then, and the cracks really started to show with the 5000-series GPUs from Nvidia.
The 5090 is only super powerful because the die size is larger, so it fits more transistors on the die, not because they actually fit more per nanometer. If you account for the die size, it’s actually even less efficient than the 4090 and significantly less efficient than the 3090. To pretend there have been upgrades, Nvidia has been releasing AI frame-rendering software for its GPUs and artificially locking that software behind the newer series. The program Lossless Scaling proves that you can in principle run AI frame rendering on any GPU, even ones from over a decade ago, and that Nvidia locking it behind specific GPUs is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.
Chip improvements have drastically slowed down for over a decade now and the industry just keeps trying to paper it over.


There is no limit to entanglement, as everything is constantly interacting with everything else and spreading the entanglement around. That is in fact what decoherence is about: spreading the entanglement throughout trillions of particles in the environment dilutes it such that quantum interference effects are too subtle to notice, but everything is all technically entangled. So if you think entanglement means things are one entity, then you pretty much have to treat the whole universe as one entity. That was the position of Bohm and Blokhintsev.
the world is run by PDF files


Mathematics is just a language to describe patterns we observe in the world. It really is not fundamentally different from English or Chinese; it is just more precise, so there is less ambiguity as to what is actually being claimed. If someone makes a logical argument with mathematics, they cannot hide behind vague buzzwords with unclear meanings that would prevent the claim from actually being tested.
Mathematics is just a language that forces you to have extreme clarity, but it is still ultimately a language all the same. Its perfect consistency hardly matters. What matters is that you can describe patterns in the world with it and use it to identify those patterns in a particular context. If the language has some sort of inconsistency that keeps it from being useful in a particular context, then you can just construct a different language that is more useful in that context.
It’s, of course, preferable that it be as consistent as possible so it is applicable to as many contexts as possible without having to change up the language, but absolute, perfect, pure consistency is not necessary either.
ChatGPT just gives the correct answer that the limit doesn’t exist.


Speed of light limitation. Andromeda is 2.5 million light years away. Even if someone debunks special relativity and finds you could go faster than light, you would be moving so fast relative to cosmic dust particles that it would destroy the ship. So, either way, you cannot practically go faster than the speed of light.
The only way we could have intergalactic travel is a one-way trip; humanity here on Earth would be long gone by the time the ship reached its destination, so we could never know whether it succeeded.


Historically they often actually have the reverse effect.
Sanctions aren’t subtle; they aren’t some sneaky way of hurting a country so that the people blame their own government and try to overthrow it. They are about as subtle as bombing a country and then blaming its government. Everyone who lives there sees the impacts of the sanctions directly and knows the cause is the foreign power. When a foreign power lays siege to a country, it often has the effect of strengthening people’s support for the government. Even the government’s flaws can be overlooked, because it can point to the foreign country’s actions to take the blame.
Indeed, North Korea is probably the most sanctioned country in history yet is also one of the most stable countries on the planet.
I thought it was a bit amusing when Russia seized Crimea and the western world’s brilliant response was to sanction Crimea and to shut down its water supply, to which Russia responded by building one of the largest bridges in Europe to facilitate trade between Russia and Crimea and by investing heavily in new water infrastructure.
If a foreign country is trying to starve you, and the other country is clearly investing a lot of money into trying to help you… who do you think you are winning the favor of with such a policy?
For some reason the western mind cannot comprehend this. They constantly insist that the western world needs to lay economic siege on all the countries not aligned with it and when someone points out that this is just making people of those countries hate the western world and want nothing to do with them and strengthening the resolve of their own governments, they just deflect by calling you some sort of “apologist” or whatever.
Indeed, during the Cuban Thaw when Obama lifted some sanctions, Obama became rather popular in Cuba, to the point that his approval ratings at times even surpassed Fidel’s, and Cuba started to implement reforms to allow further economic cooperation with the US government and US businesses. They were very happy to become an ally of the US, but then Democrats and Republicans collectively decided to do a 180, abandon all of that, and destroy all the goodwill that had built up.
But the people of Cuba are not going to capitulate, because the government is actually popular, as US internal documents constantly admit, and that popularity will only be furthered by the tightened blockade. The US is just going to create a North Korea-style scenario off its own coast.
Depends upon what you mean by realism. If you just mean belief in a physical reality independent of a conscious observer, I am not really of the opinion you need MWI to have a philosophically realist perspective.
For some reason, everyone intuitively accepts the relativity of time and space in special relativity as an ontological feature of the world, but when it comes to the relativity of the quantum state, people’s brains explode and they start treating it like it has to do with “consciousness” or “subjectivity” or something and that if you accept it then you’re somehow denying the existence of objective reality. I have seen this kind of mentality throughout the literature and it has never made sense to me.
Even Eugene Wigner did this: when he proposed the “Wigner’s friend” thought experiment, he pointed out how two different observers can come to describe the same system differently, and then concluded that this proves quantum mechanics is deeply connected to “consciousness.” But we have known that two observers can describe the same system differently since Galileo first introduced the concept of relativity back in 1632. There is no reason to take it as having anything to do with consciousness or subjectivity or anything like that.
(You can also treat the wavefunction nomologically as well, and then the nomological behavior you’d expect from particles would be relative, but the ontological-nomological distinction is maybe getting too much into the weeds of philosophy here.)
I am partial to the way the physicist Francois-Igor Pris puts it. Reality exists independently of the conscious observer, but not independently of context. You have to specify the context in which you are making an ontological claim for it to have physical meaning. This context can be the perspective of a conscious observer, but nothing about the observer is intrinsic here; what is intrinsic is the context, and that is just one of many possible contexts from which an ontological claim can be made. Two observers can describe the same train as traveling at different velocities, not because they are conscious observers, but because they are describing the same train from different contexts.
The philosopher Jocelyn Benoist and the physicist Francois-Igor Pris have argued that the natural world does have a kind of inherent observer-observed divide, but that these terms are misleading, since “subject” tends to imply a human subject and “observer” tends to imply a conscious observer, and that a lot of the confusion clears up once you describe this divide in a more neutral, non-anthropomorphic way, which they settle on by talking about the “reality” and the “context.” The reality of the velocity of the train will be different in different contexts. You don’t have to invoke “observer-dependence” to describe relativity. Hence, you can indeed describe quantum theory as a theory of physical reality independent of the observer.
MWI very specifically commits to the existence of a universal wavefunction. Everett’s original paper is literally titled “The Theory of the Universal Wavefunction.” If you instead only take relative states seriously, that position is much closer to relational quantum mechanics. In fact, Carlo Rovelli explicitly describes RQM as adopting Everett’s relative-state idea while rejecting the notion of a universal quantum state.
MWI claims there exists a universal quantum state, but quantum theory works perfectly well without this assumption if quantum states are taken to be fundamentally relative. Every quantum state is defined in relation to something else, which is made clear by the Wigner’s friend scenario where different observers legitimately assign different states to the same system. If states are fundamentally relative, then a “universal” quantum state makes about as much sense as a “universal velocity” in Galilean relativity.
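As a small illustration of that relativity of state assignments (my own numpy sketch, not drawn from any of the papers mentioned): after the friend measures the system and sees, say, outcome 0, the friend assigns the system the pure state |0⟩, while Wigner, describing the sealed lab unitarily from outside, assigns an entangled system-friend state whose reduced description of the system is maximally mixed. Both assignments are legitimate relative to their respective contexts.

```python
import numpy as np

# The friend measures the system (initially (|0>+|1>)/sqrt(2)) and sees 0;
# relative to the friend, the system is now in the pure state |0>.
friend_assigns = np.array([1, 0], dtype=complex)

# Wigner, outside the sealed lab, describes system+friend unitarily as the
# entangled state (|0>|"saw 0"> + |1>|"saw 1">)/sqrt(2).
wigner_assigns = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Wigner's reduced state for the system alone (partial trace over the friend):
rho = np.outer(wigner_assigns, wigner_assigns.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho, axis1=1, axis2=3)

print(np.outer(friend_assigns, friend_assigns.conj()).real)  # [[1, 0], [0, 0]]
print(rho_sys.real)  # [[0.5, 0], [0, 0.5]] -- a different, equally valid state
```

Neither assignment is "the" state of the system simpliciter; each is the state relative to a different context, which is exactly the relational point.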
You could arbitrarily choose a reference frame in Galilean relativity and declare it universal, but this requires an extra postulate, is unnecessary for the theory, and is completely arbitrary. Likewise, you could pick some observer’s perspective and call that the universal wavefunction, but there is no non-arbitrary reason to privilege it. That wavefunction would still be relative to that observer, just with special status assigned by fiat.
Worse, such a perspective could never truly be universal because it could not include itself. To do that you would need another external perspective, leading to infinite regress. You never obtain a quantum state that includes the entire universe. Any state you define is always relative to something within the universe, unless you define it relative to something outside of the universe, but at that point you are talking about God and not science.
The analogy to Galilean relativity actually is too kind. Galilean relativity relies on Euclidean space as a background, allowing an external viewpoint fixed to empty coordinates. Hilbert space is not a background space at all; it is always defined in terms of physical systems, what is known as a constructed space. You can transform perspectives in spacetime, but there is no transformation to a background perspective in Hilbert space because no such background exists. The closest that exists is a statistical transformation to different perspectives within Liouville space, but this only works for objects within the space; you cannot transform to the perspective of the background itself as it is not a background space.
One of the papers I linked also provides a no-go theorem as to why a universal quantum state cannot possibly exist in a way that would be consistent with relative perspectives. There are just so many conceptual and mathematical problems with a universal wavefunction. Even if you somehow resolve them all, your solution will be far more convoluted than just taking the relative states of quantum mechanics at face value. There is no need to “explain measurement” or introduce a many worlds or a universal wavefunction if you just accept the relative nature of the theory at face value and move on, rather than trying to escape it (for some reason).
But this is just one issue. The other elephant in the room is the fifth point that even if you construct a theory that is at least mathematically consistent, it still would contain no observables. MWI is a “theory” which lacks observables entirely.