• 0 Posts
  • 126 Comments
Joined 2 years ago
Cake day: July 7th, 2024



  • Technically, aether theory was never ruled out. People love to claim that the Michelson-Morley experiment ruled it out, but this is historical revisionism. The MM experiment was conducted in 1887; Hendrik Lorentz proposed his aether model in 1904. Obviously Lorentz was not such a moron that he would fail to take the findings of MM into account, but that is what people are unironically suggesting when they say MM somehow retrocausally ruled out his model. Indeed, neither Michelson nor Morley believed their own experiment ruled out the aether; both continued to promote aether models.

    Lorentz’s aether model and Einstein’s relativity are mathematically equivalent, so they make all the same predictions; no possible experiment could rule out Lorentz’s aether theory without also ruling out Einstein’s relativity. Indeed, if you read Einstein’s 1905 paper introducing special relativity, his criticism of Lorentz’s model is purely philosophical. He never claimed an experiment could rule it out. MM only rules out some very early aether models, not Lorentz’s.

    I would also recommend checking out John Bell’s paper “How to Teach Special Relativity,” where he discusses this fact and shows how the mathematics of special relativity is perfectly consistent with a reality that has absolute space and time. Taking space and time to be relative only comes in at the level of metaphysical interpretation.



  • bunchberry@lemmy.world to Science Memes@mander.xyz · Theories on Theories

    It’s amazing how nonsensical the actual foundational axioms of modern day economics are.

    Classical economics tried to tie economics to functions of physical things we can measure. Adam Smith, for example, proposed that because you can recursively decompose every product into the physical units of labor time it takes to produce, all the way down the supply chain, any stable economy should, on average (not in the individual case), buy and sell in a way that roughly reflects that time; otherwise there would have to be physical time shortages or waste, which would lead to economic problems. We may thus be able to use this time parameter to make quantifiable predictions about the economy.
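    As a purely illustrative formalization of that recursive decomposition (standard input-output notation, my symbols, not Smith’s own): if $l_j$ is the direct labor time per unit of product $j$ and $a_{ij}$ is the amount of input $i$ needed per unit of $j$, the total embodied time $v_j$ satisfies

    $$v_j = l_j + \sum_i a_{ij} v_i, \qquad \text{i.e.}\quad v = (I - A^{\mathsf{T}})^{-1} l$$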

    Many people had philosophical objections to this because it violates free will. If you can roughly predict what society will do based on physical factors, then you are implying that people’s decisions are determined by physical parameters. Humans have the “free will” to just choose to buy and sell at whatever price they want, and so the economy cannot be reduced beyond the decisions of the human spirit. There was thus a second school of economics which tried to argue that maybe you could derive prices from measuring how much people subjectively desire things, measured in “utils.”

    “Utils” are of course such ambiguous nonsense that eventually these economists realized that this cannot work, so they proposed a different idea instead, which is to focus on marginal rates of substitution. Rather than saying there is some quantifiable parameter of “utils,” you say that every person would be willing to trade some quantity of object X for some quantity of object Y, and then you try to define the whole economy in terms of these substitutions.

    However, there are two obvious problems with this.

    The first problem is that to know how people would be willing to substitute things rigorously, you would need an incredibly deep and complex understanding of human psychology, which the founders of neoclassical economics did not have. Without a rigorous definition, you could not fit it to mathematical equations. It would just be vague philosophy.

    How did they solve this? They… made it up. I am not kidding you. Look up the axioms of consumer preference theory whenever you have the chance. It is a bunch of made-up axioms about human psychology, many of which are quite obviously not even correct (for example, you have to assume that every person has evaluated and ranked every product in the entire economy, and that every person would always be more satisfied with more of any given good), but you have to adopt those axioms in order to derive any of the mathematics at all.
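    For reference, the two core axioms as standard texts state them (exact lists vary by textbook; the notation is mine, with $x \succsim y$ meaning “$x$ is at least as preferred as $y$”):

    $$\text{Completeness: } \forall x, y:\; x \succsim y \;\lor\; y \succsim x \qquad\qquad \text{Transitivity: } x \succsim y \,\land\, y \succsim z \;\Rightarrow\; x \succsim z$$

    Completeness is the “has ranked every product in the economy” assumption; textbooks then add continuity and monotonicity (“more is always better”) to get a well-behaved utility function.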

    The second problem is one first pointed out, to my knowledge, by the economist Nikolai Bukharin: an economic model grounded in human psychology cannot even be predictive, because there is no logical reason to believe that the behavior of everything in the economy, including all social structures, derives purely from human psychology. That is, you cannot rule out a back-reaction whereby the preexisting social structures and environmental factors people are born into shape their psychology in turn, and Bukharin gives a good proof by contradiction that this back-reaction must exist.

    The idea that you can derive everything from some arbitrary set of immutable mathematical laws, made up in someone’s armchair one day, that supposedly details human behavior rigorously and is irreducible to anything else, is just nonsense. No one has ever even tested these laws that supposedly govern human psychology.


  • bunchberry@lemmy.world to Science Memes@mander.xyz · Theories on Theories

    Surprisingly, that is a controversial view. Most physicists insist QM has nothing to do with probability! But then why does it only give you probabilistic predictions? Ye olde measurement problem: an entirely fabricated problem, because physicists cannot accept that a theory that gives you probabilities is obviously a probabilistic theory.


  • Also Bell experiments have proven the indeterminacy which you say is absurd. No theory of local hidden variables can describe quantum mechanics.

    You say Bell’s theorem disproves realism, but then you immediately follow it up by saying it disproved local realism. Do you see how those are not the same statements? It never even crossed Bell’s mind to deny reality. He believed the conclusion of his own theorem is just that nature is not local.

    (Technically, anything explained non-locally can also be explained non-temporally instead, so it is more accurate methinks to say spatiotemporal realism is ruled out. I am not as big of a fan of thinking about it non-temporally but there are some respectable people like Avshalom Elitzur who do. Thinking about it non-locally is far more intuitive.)

    Also, again, this is not about indeterminacy and determinacy, but about indefiniteness and definiteness, i.e. anti-realism vs realism. These are not the same things. To say something is indeterminate is merely to imply it is random. To say something is indefinite is to say it doesn’t even have a value at all. Definiteness is sometimes called realism because it is essentially object permanence: the idea that systems still possess observable properties even when they are not being directly observed in the moment.

    He’s asking where the line is between this indeterminacy and determinacy. At what scale do things move from quantum to “real” and why?

    You could in principle make this non-realism make sense if you imposed some sort of well-defined physical conditions as to when particles take on real values. Bell described this as a kind of “flash” ontology because you would not have continuous definite values but “flashes” of definite values under certain conditions. But it turns out that you cannot do this without contradicting the mathematics of quantum mechanics.

    These are called physical collapse models, like GRW theory. But these transitions are irreversible, even though all evolution operators in quantum mechanics are unitary and thus reversible. So, in principle, if you rigorously define what conditions cause the transition, you could conduct an experiment where you set up those conditions and then try to reverse the process. Orthodox quantum theory and the physical collapse model would make different predictions at that point.
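    For concreteness, a GRW localization event (with the parameter values usually quoted for the theory) multiplies the wavefunction by a Gaussian and renormalizes,

    $$\psi(x) \;\to\; \frac{1}{N}\, e^{-(x-\bar{x})^2/2\sigma^2}\, \psi(x), \qquad \sigma \approx 10^{-7}\,\text{m}, \quad \text{rate } \lambda \approx 10^{-16}\,\text{s}^{-1} \text{ per particle,}$$

    with the hit center $\bar{x}$ drawn with probability given by the resulting norm. The irreversibility is visible directly: multiplication by a Gaussian is not a unitary operation.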

    These models never end up being local, anyways.

    The reason I say value indefiniteness is absurd as a way to interpret quantum mechanics is that it is not necessitated by the mathematics at all, and if you believe it:

    1. It devolves into solipsism if you do not rigorously define a mathematical criterion as to when definite values arise, because then nothing has real values outside of you directly looking at it.
    2. If you do rigorously define a criterion, then it is no longer quantum mechanics but an alternative theoretical model.

    So, either it devolves into solipsism, or it is a different theory to begin with.

    Bell was fine with #2 as long as people were honest about that being what they were doing. He wrote an article “Against ‘Measurement’” where he criticized the vagueness of people who claim there is a transition “at measurement” but then do not even rigorously define what qualifies as a “measurement.” He wrote positively of GRW theory in his paper “Are there Quantum Jumps?” precisely because they do give a rigorous mathematical definition of how this process takes place.

    But Bell also didn’t particularly believe there was any reason to believe in value indefiniteness to begin with. You can just interpret quantum mechanics as a kind of stochastic mechanics, one with non-local features, where it is random but particles still have definite values at all times. The same year he published his famous theorem in “On the Einstein Podolsky Rosen Paradox” (1964), he also wrote “On the Problem of Hidden Variables” (published in 1966) debunking von Neumann’s proof that quantum mechanics supposedly cannot be interpreted in value definite terms. He also wrote “Beables for Quantum Field Theory,” where he shows QFT can be represented as a stochastic theory, and “On the Impossible Pilot Wave,” where he promoted pilot wave theory, not necessarily because he believed it, but because he saw it as a counterexample to all the supposed “proofs” that quantum mechanics cannot be interpreted as a value definite theory.

    My point isn’t about randomness/indeterminacy. It is about “indefiniteness,” the claim that things have no values until you look. This either devolves into solipsism or into a theory which is not quantum mechanics. It is far simpler to just say the systems have values when you’re not looking and you simply don’t know what they are, because the random evolution of the system prevents you from tracking them. It is sort of like this: if I hit a fork in the road and take either the left or right path, and you don’t know which, you wouldn’t conclude I didn’t take a path at all until you look. You would conclude that you just don’t know which it is, and maybe assign probabilities to the options. The fact that the probability distribution doesn’t contain a definite value does not demonstrate that the real world doesn’t contain a definite value, and believing it doesn’t just needlessly over-complicates things. And definite ≠ deterministic. Maybe the path taken is truly random, but there is a path taken.


  • Not to be the 🤓 but just so we’re clear, the point of Schrödinger’s cat was to illustrate that you can’t know a quantum state until you measure it. Basically just saying “probability exists.”

    That wasn’t Schrödinger’s point at all.

    Schrödinger was responding to people in Bohr and von Neumann’s camp who claimed that particles described mathematically by a superposition of states literally have no real observables in the real world at all. It is not just that they are random or probabilistic: people in the “anti-realist” camp argue that such particles effectively no longer exist when they are described mathematically by a superposition of states. This position is sometimes called value indefiniteness.

    Schrödinger was criticizing this position by pointing out that you cannot separate your beliefs about the microworld from the macroworld, because macroscopic objects like cats are also made up of particles and should follow the same rules. Hence, he puts forward a thought experiment whereby a cat would also be described mathematically in a superposition of states.

    If you think a superposition of states means something no longer has real definite properties in the real world, then the cat wouldn’t have real definite properties in the real world until you open the box. Schrödinger’s point was that this is such an obvious absurdity that we should reject value indefiniteness for individual particles as well.

    You say:

    The reason it’s a big deal is that this probability is a real property. One that is supposed to be only one of two states. But instead it isn’t really in a state at all until you measure it, and that’s weird.

    But that is exactly the point Schrödinger was criticizing, not supporting.

    Value indefiniteness / anti-realism ultimately amounts to solipsism, because other people are also made up of particles: if particles lack real, definite, observable properties in the real world when you are not looking at them, then other people also lack real, definite, observable properties in the real world when you are not looking at them.

    He was trying to illustrate that this position reduces to an absurdity and so we should not believe in that position.

    The point is that instead of assuming it is in one state or the other, you can and often should think of both possibilities at once. This is what makes quantum computing useful.

    If you perform a polar decomposition on the quantum state, you are left with a probability vector and a phase vector. The probability vector is the same kind of probability vector you use in classical probabilistic computing. Its update rule in quantum computing differs only by an additional non-linear term that depends upon the phase vector.

    The "advantage’ comes from the phase vector. For N qubits, there are 2^N phases. A system of 300 qubits would have 2^300 phases, which is far greater than the number of atoms in the observable universe. A single logic gate thus can manipulate far more states of the system at once because it can manipulate these phases, which the stochastic dynamics of the bits have a dependence upon the phases, and thus you can not only manipulate the phases to do calculations but, if you are clever, you can write the algorithm in such a way that the effect it has on the probability distribution allows you to read off the results from the probability distribution.

    The phase vector does not contain anything probabilistic, so it contains nothing that looks like the qubit being in two places at once. That is contained in the probability vector, but there is no better reason to interpret a probability distribution as the system being in two places at once in quantum mechanics than there is in classical mechanics. The advantage comes from the phases: the state of the phases can influence the stochastic perturbations of the bits, and thus the probability distribution.

    So you simply apply operations that increase or decrease the chances of certain outcomes and repeat until the answer you want has an incredibly high probability and the rest are nearly zero. Then you measure your qubit, collapsing the wave function, with a high probability that collapse will give you the answer you wanted.

    Again, perform a polar decomposition on the quantum state, break it apart into the probability vector and a phase vector. Then, apply a Bayesian knowledge update using Bayes’ theorem to the probability vector, exactly the way you’d do it in classical probabilistic computing. Then, simply undo the polar decomposition, i.e. recompose it back into a single complex-valued vector in Cartesian form.

    What you find is that this is mathematically equivalent to the collapse of the wavefunction. The so-called “collapse of the wavefunction” is literally just a Bayesian knowledge update on the degree of freedom of the quantum state associated with the probability distribution of the bits.
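    A minimal numpy sketch of that claim (my own toy example: a two-qubit register, measuring one qubit in the computational basis):

    ```python
    import numpy as np

    # Hypothetical normalized two-qubit state in the basis |00>,|01>,|10>,|11>.
    psi = np.array([0.5, 0.5j, -0.5, 0.5], dtype=complex)

    # Polar decomposition: probability vector + phase vector.
    p = np.abs(psi) ** 2      # same kind of vector as in classical probabilistic computing
    theta = np.angle(psi)     # the 2^N phases

    # Measure the first qubit and get 0: Bayesian update on p alone.
    likelihood = np.array([1, 1, 0, 0])   # P(first qubit reads 0 | basis state)
    p_post = p * likelihood / (p * likelihood).sum()

    # Recompose into Cartesian form, phases untouched.
    psi_post = np.sqrt(p_post) * np.exp(1j * theta)

    # Textbook projection-and-renormalize "collapse" for comparison.
    proj = psi * likelihood
    proj = proj / np.linalg.norm(proj)

    print(np.allclose(psi_post, proj))    # True: the Bayesian update on the
                                          # probability vector reproduces collapse
    ```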

    It’s less like “the cat is both alive and dead” and more that “the terms ‘alive’ and ‘dead’ do not apply to the cat till you open the box”

    Sure, but that position reduces to solipsism, because then you don’t exist with a definite value until I look at you, either. But clearly you are thinking definite thoughts when I’m not looking, right?


  • bunchberry@lemmy.world to Science Memes@mander.xyz · Gottem

    They do have values. Their position is just a superposition, rather than one discrete one, which can be described as a wave. Their value is effectively a wave until it’s needed to be discrete.

    To quote Dmitry Blokhintsev: “This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.”

    When I say “real values” I do not mean pure abstract mathematics. We do not live in a Platonic realm. The mathematics are just a tool for predicting what we observe in the real world. Don’t confuse the map for the territory. The abstract wave has no observable properties, it is pure mathematics. If the whole world was just one giant wave in Hilbert space, then this would be equivalent to claiming that the entire world is just one big mathematical function without any observable properties at all, which obviously makes no sense as we can clearly observe the world.

    To quote Rovelli: “The gigantic, universal ψ wave that contains all the possible worlds is like Hegel’s dark night in which all cows are black: it does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world.”

    Again, as I said in my first comment, any mathematical theory that describes the world needs to, at some point, include symbols which directly refer to something we can observe. An abstract mathematical function contains no such symbols. If you really believe that particles transform into purely mathematical waves, then you need some process to transform them back, or else you cannot explain what we observe at all, and so far the only process you have put forward is “it happens at every interaction” which is just objectively and empirically wrong because then entanglement would be impossible.

    This is why you run into contradictions like the “Wigner’s friend” paradox where Wigner would describe his friend in a superposition of states, and if you believe that this literally means that all that exists inside the room is an abstract function, then you cannot explain how the observer in the room can perceive anything that they later claim they do, because there would be no observables inside of the room.

    You cannot get around criticisms of solipsism by just promoting purely abstract mathematical entities to being “objective reality” as if objects transform into purely Platonic mathematical functions. At least, if you are going to claim this, then you need some rigorous process to transform them back into something that is described with mathematical language where some of the symbols refer to something we can actually observe such that we can then explain how it is that we can observe it to have the properties that it does when we look at it.

    Sure. That doesn’t make the general understanding of the thought experiment accurate. Once the decay of the atom that triggers the poison is detected, it’s no longer in a superposition. It has to not be in order for the detection to occur.

    Please scroll up and read my actual comment. You seem to have skipped all the important technical bits, because you are claiming something which is mathematically incompatible with the predictions of quantum mechanics. The personal theory you are inventing here would literally render entanglement impossible.

    The double slit experiment shows that an interaction can change the result from wave-like to particle-like behavior.

    Decoherence is not relevant here. Decoherence theory works like this:

    1. Assume that the system+environment become entangled.
    2. Assume that the observer loses track of the environment.
    3. Trace out the environment.
    4. This leaves you with a reduced density matrix for the system where the coherence terms have dropped to 0.
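    A minimal numpy sketch of those four steps for the smallest possible case, a single system qubit entangled with a single environment qubit:

    ```python
    import numpy as np

    # Step 1: system + environment entangled in a Bell state (|00> + |11>)/sqrt(2).
    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())        # full density matrix: still a pure,
                                           # coherent superposition

    # Steps 2-3: lose track of the environment and trace it out.
    rho4 = rho.reshape(2, 2, 2, 2)         # indices: (sys, env, sys', env')
    rho_sys = np.einsum('ikjk->ij', rho4)  # marginalize over the environment

    # Step 4: reduced density matrix for the system.
    print(rho_sys.real)
    # [[0.5 0. ]
    #  [0.  0.5]]   <- off-diagonal (coherence) terms have dropped to 0
    ```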

    Notice that step #2 is entirely subjective. We are just assuming that the observer has lost track of the environment in terms of their subjective epistemic access, and step #3 is then akin to statistically marginalizing over the environment in order to then remove it from consideration.

    This isn’t an actual physical transition but an epistemic one. The system+environment are still in a coherent superposition of states, and decoherence theory merely shows that it looks like it has decohered if you only have subjective knowledge on a small portion of the much larger coherent superposition of states.

    If you believe that a superposition of states means the system has no observable properties and is just purely a mathematical function, then decoherence does not solve your problem at all, because it is ultimately a subjective process and not a physical one. If you had studied the environment thoroughly enough before running the experiment to include it in your model, decoherence would not occur.

    I’m literally not. My entire point is that it isn’t a solipsism. Any interaction causes the waveform to collapse.

    Which, again, renders entanglement impossible, since objects must interact to become entangled.

    If we accepted your personal theory, then quantum computers should be impossible, because the qubits all need to interact many times over as the algorithm progresses in order to become entangled and create a superposition of states across the whole computer’s memory.
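    A minimal numpy sketch of the point: entanglement is created precisely by an interaction (here a CNOT gate), with no collapse anywhere:

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                # control = first qubit

    psi = np.array([1, 0, 0, 0])                   # |00>
    psi = np.kron(H, np.eye(2)) @ psi              # (|00> + |10>)/sqrt(2)
    psi = CNOT @ psi                               # (|00> + |11>)/sqrt(2): Bell state

    print(psi.round(3))   # [0.707 0. 0. 0.707]
    # If the CNOT interaction "collapsed" the control qubit to a definite 0 or 1
    # first, the output would be the product state |00> or |11>, never this
    # entangled superposition -- and no Bell-inequality violation could occur.
    ```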

    You are not listening, and you are advocating things that are trivially wrong.

    yet you give no explanation of an alternative. Something is happening. How do you explain it?

    I just don’t deny value definiteness. That’s it. There is nothing beyond this.

    Consider a perfectly classical world which is nonetheless fundamentally random. The randomness of interactions would prevent us from tracking the definite values of particles at a given moment in time, so we could only track them with an evolving probability distribution. We can represent this probability distribution with a vector and represent interactions with stochastic matrices. Given that the model does not include observable definite values, would it then be rational to claim that particles suddenly transform into an infinite-dimensional vector in configuration space when you’re not looking at them and lose all their observable properties? No, of course not. The particles still have real observable properties in the real world; you just lose track of them in the model due to their random evolution.

    You could create a simulation where you assign definite values and permute them stochastically at each interaction, and this would produce the same statistical results if you make a measurement at any given step. It is the same with quantum mechanics. It is just a form of non-classical statistical mechanics. There is no empirical, mathematical, or philosophical reason to believe that particles stop possessing real values when you are not looking at them. It is not hard to put together a simulation where the qubits are assigned definite bit values at all times and each logic gate just stochastically permutes those bit values. I even created one myself here. John Bell also showed you can do this with quantum field theory in his paper “Beables for Quantum Field Theory.”
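    As an illustration of the kind of simulation I mean (a crude sketch with one simple choice of transition kernel, not Bell’s exact construction), here the two bits hold a definite value at every step, each gate stochastically permutes that value, and the single-time measurement statistics come out right:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

    def run_once():
        psi = np.array([1, 0, 0, 0], dtype=complex)   # bookkeeping wavefunction
        config = 0                                    # definite bit values: |00>
        for gate in (np.kron(H, np.eye(2)), CNOT):
            psi = gate @ psi
            # Stochastic permutation of the definite values: jump to a new
            # configuration drawn from |psi|^2. (Bell's construction uses
            # specific jump rates; any kernel with these marginals reproduces
            # the single-time measurement statistics.)
            config = rng.choice(len(psi), p=np.abs(psi) ** 2)
        return config

    counts = np.bincount([run_once() for _ in range(10_000)], minlength=4)
    print(counts / 10_000)   # ~[0.5, 0, 0, 0.5]: Bell-state statistics, with a
                             # definite configuration existing at every step
    ```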


  • bunchberry@lemmy.world to Science Memes@mander.xyz · Gottem

    Value indefiniteness is just solipsism. If particles do not have values when you are not looking, then any object made of particles also has no values when you are not looking. This was the point of Schrödinger’s “cat” thought experiment. Your beliefs about the microworld inherently have implications for the macroworld. If particles don’t exist when you’re not looking at them, then neither do cats, or other people. This view of “value indefiniteness” you are trying to defend is indefensible because it is literally solipsism, and any attempt to promote it above solipsism will just become incoherent.

    You say:

    it’s when position is needed to be known that causes it. Until then, the position is in a superstate of all possible positions, but for an interaction to occur it needs to be in one position.

    This is trivially false, because then it would not be possible for two particles to become entangled on the position basis, which requires them to interact in such a way that depends upon their position values. The other particle would thus need to “know” its position value to become entangled with it, and if this leads to a “collapse,” then such entanglement could not occur. Yet we know it can occur in experiments.

    If by “know” you mean humans knowing and not other particles, yeah, okay, but that’s obviously solipsism.

    Any attempt to defend value indefiniteness will always either amount to:

    1. Solipsism
    2. Something that is trivially wrong
    3. A theory which is not quantum mechanics (makes different predictions)

    This (at least your wording) implies that physics cares about our mathematical models. It doesn’t. Quantum mechanics and “classical” physics are just ways we organize things for education.

    I don’t blame them, it is literally the textbook Dirac-von Neumann axioms. That is how it is taught in schools, even though it is obviously incoherent. You are taught that there is a “Heisenberg cut” between the quantum and classical world, with no explanation of how this occurs.

    Though we don’t have a model for it, the universe is not using two separate models of physics. There is no “quantum mechanics” and “classical physics”. There is only physics.

    The problem is that the orthodox interpretation of quantum mechanics does not even allow you to derive classical physics minus gravity in a limiting case from quantum mechanics. It is not even a physical theory of nature at all.

    We know from the macroscopic world that particles have real observable properties, yet value indefiniteness denies that they have real observable properties, and it provides no method of telling you when those real, observable properties get added back to the world. It thus cannot make a single empirical prediction at all without the sleight-of-hand of simply declaring, as a matter of axiom in the Dirac-von Neumann textbook formulation of quantum mechanics, that it happens “at measurement.”

    If measurement is taken to be a subjective observation, then it is just solipsism. If measurement is taken to be a physical process, then it cannot reproduce the mathematical predictions of quantum mechanics, because this “Heisenberg cut” would be a non-reversible process, yet all unitary evolution operators are reversible. Hence, any model which includes a rigorous definition of “measurement” (like Ghirardi–Rimini–Weber theory) would include an additional non-reversible process. You could then just imagine setting up an experiment where this process would occur and then try to reverse it. The mathematics of quantum mechanics and your theory would inevitably lead to different predictions in such a process.

    Therefore, again, if you believe in value indefiniteness, then you either (1) are a solipsist, (2) don’t believe in quantum mechanics but think it will be replaced by a physical collapse model, or (3) are confused.

    The only way for quantum mechanics to be self-consistent is to reject value indefiniteness, at least as a metaphysical point of view. This does not require actually modifying the mathematics. If nature is random, then of course the definite values will evolve statistically such that they could not be tracked and included in the model. All you would need to then demonstrate is that quantum statistics converges to classical statistics in a limiting case on macroscopic scales, which is achieved by the theory of decoherence.

    But the theory of decoherence achieves nothing if you believe in value indefiniteness, because if you believe quantum mechanics has nothing to do with statistics at all, then there is no reason to conclude that what you get in the reduced density matrices after you trace out the environment has anything to do with classical statistics, either.

    There is no good argument in the academic literature for value indefiniteness. It is an incoherent worldview based on no empirical evidence at all. People who believe it often just mindlessly regurgitate statements like “Bell’s theorem proves it!” yet cannot articulate what Bell’s theorem even is or how on earth it proves that, especially since Bell himself was the biggest critic of value indefiniteness yet wrote the damned theorem!


  • Indeed, to some extent, it has always been both necessary and proper for man, in his thinking, to divide things up, and to separate them, so as to reduce his problems to manageable proportions; for evidently, if in our practical technical work we tried to deal with the whole of reality all at once, we would be swamped…However, when this mode of thought is applied more broadly…then man ceases to regard the resulting divisions as merely useful or convenient and begins to see and experience himself and his world as actually constituted of separately existent fragments…fragmentation is continually being brought about by the almost universal habit of taking the content of our thought for ‘a description of the world as it is’. Or we could say that, in this habit, our thought is regarded as in direct correspondence with objective reality. Since our thought is pervaded with differences and distinctions, it follows that such a habit leads us to look on these as real divisions, so that the world is then seen and experienced as actually broken up into fragments.

    — David Bohm, “Wholeness and the Implicate Order”



  • bunchberry@lemmy.world to Science Memes@mander.xyz · big facts

    If you appeal to heat death then you cannot say brains pop back into existence either because “matter has a finite life,” and so it is self-defeating. If brains can pop back into existence due to random fluctuations then surely planets and stars could as well given enough time.



  • Einstein didn’t even get a Nobel Prize for special relativity because it was considered too radical at the time.

    He shouldn’t have gotten one for SR specifically anyways, because Hendrik Lorentz had already developed a mathematically equivalent theory and presented it a year prior to Einstein.

    The speed of light can be derived from Maxwell’s equations. It is weird to be able to derive a speed just by analyzing how electromagnetism works, because anyone in any reference frame would derive the same speed, which implies the existence of a universal speed. But if the speed is universal, what is it universal relative to?
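    Concretely, in vacuum Maxwell’s equations combine into a wave equation whose propagation speed is fixed by two constants of electromagnetism, with no reference to any observer:

    $$\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}, \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.0 \times 10^8\ \text{m/s}$$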

    Physicists prior to Einstein believed there might be a universal reference frame which defines absolute time and absolute space, these days called a preferred foliation. The Michelson-Morley experiment was an attempt to measure the existence of this preferred foliation because most theories of how it worked would render it detectable in principle, but found no evidence for it.

    Most physicists these days retell this experiment as having debunked the idea and led to its replacement with Einstein’s special relativity. But the truth is more complicated, because Lorentz found you could patch the idea by just assuming objects physically contract based on their motion relative to the preferred foliation. Lorentz’s theory was presented in 1904, a year before Einstein’s, and was mathematically equivalent, so it makes all the same predictions; anything Einstein’s theory predicts, his theory would’ve also predicted.
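    The contraction Lorentz postulated is the now-familiar length contraction, where in his reading $v$ is the velocity relative to the preferred frame:

    $$L = L_0 \sqrt{1 - v^2/c^2}$$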

    The reason Lorentz’s theory fell by the wayside is that, by explaining the results of the Michelson-Morley experiment which was meant to detect the preferred foliation, it rendered the foliation undetectable, and so people preferred Einstein’s theory, which simply threw out this undetectable element. But it would still be weird to give Einstein the Nobel prize for what is ultimately a simplification of Lorentz’s theory. (Einstein also already received one for something he did deserve anyways.)

    But there are also good reasons these days to consider putting the preferred foliation back in, and to think Lorentz was right. The Friedmann solution to Einstein’s general relativity (the solution associated with the universe we actually live in) spontaneously gives rise to a preferred foliation which is actually empirically observable. You can measure your absolute motion relative to the universe by looking at the dipole in the cosmic microwave background. Since this can be measured, and our absolute motion in the universe actually has been measured, the argument against Lorentz’s theory is much weaker.
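    The measurement is simple first-order Doppler physics: motion at velocity $v$ relative to the CMB rest frame produces a dipole temperature pattern across the sky (numbers below are the commonly quoted values):

    $$\frac{\Delta T(\theta)}{T_0} \approx \frac{v}{c}\cos\theta, \qquad T_0 \approx 2.725\,\text{K}, \quad v \approx 370\,\text{km/s} \;\Rightarrow\; \Delta T \approx 3.4\,\text{mK}$$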

    An even stronger argument, however, comes from quantum mechanics. A famous theorem by the physicist John Bell proves the impossibility of “local realism,” where locality means locality in the sense of special relativity, and realism means the belief that particles have real states in the physical world independently of you looking at them (called the ontic states) which explain what shows up on your measurement device when you try to measure them. Since many physicists are committed to the idea of special relativity, they conclude that Bell’s theorem must debunk realism, that objective reality does not exist independently of you looking at it, and devolve into bizarre quantum mysticism and weirdness.

    But you can equally interpret this to mean that special relativity is wrong and that the preferred foliation needs to put back in. The physicist Hrvoje Nikolic for example published a paper titled “Relativistic QFT from a Bohmian perspective: A proof of concept” showing that you can fit quantum mechanics to a realist theory that reproduces the predictions of relativistic quantum mechanics if you add back in a preferred foliation.


  • “Why” implies an underlying ontology. Maybe there is something underneath it, but this is as far down as it goes as far as we currently know. If we don’t at least tentatively accept that our current most fundamental theories describe the fundamental ontology of nature, at least as far as we currently know, then we can never believe anything about nature at all, because it would be an infinite regress. Every time we discover a new theory we could ask “well, why does it work like that?” and so it would be impossible to actually believe anything about nature.



  • There are nonlocal effects in quantum mechanics, but I am not sure I would consider quantum teleportation to be one of them. Quantum teleportation may look nonlocal at first glance, but it can be trivially fit to local hidden variable models, such as Spekkens’ toy model, which makes it seem, at least to me, to belong in the class of local algorithms.

    You have to remember that what is being “transferred” is a statistical description, not something physically tangible, and it is only observable in a large sample size (an ensemble). Hence, it would be strange to think that the qubit is holding a register of its entire quantum state and that this register disappears and reappears on another qubit. The total information in the quantum state only exists in an ensemble.

    In an individual run of the experiment, clearly, the joint measurement of 2 bits of information and their transmission over a classical channel is not transmitting the entire quantum state, but the quantum state is not something that exists in an individual run of the experiment anyways. The total information transmitted over an ensemble is much greater and would provide sufficient information to move the statistical description of one of the qubits to another entirely locally.

    The complete quantum state is transmitted through the classical channel over the whole ensemble, and not in an individual run of the experiment. Hence, it can be replicated in a local model. It only looks like more than 2 bits of data is moving from one qubit to the other if you treat the quantum state as if it actually is a real physical property of a single qubit, because obviously that is not something that can be specified with 2 bits of information, but an ensemble can indeed encode a continuous distribution.

    This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.

    — Dmitry Blokhintsev

    Here’s a trivially simple analogy. We describe a system in a statistical distribution of a single bit with [a; b] where a is the probability of 0 and b is the probability of 1. This is a continuous distribution and thus cannot be specified with just 1 bit of information. But we set up a protocol where I measure this bit and send you the bit’s value, and then you set your own bit to match what you received. The statistics on your bit now will also be guaranteed to be [a; b]. How is it that we transmitted a continuous statistical description that cannot be specified in just 1 bit with only 1 bit of information? Because we didn’t. In every single individual trial, we are always just transmitting 1 single bit. The statistical descriptions refer to an ensemble, and so you have to consider the amount of information actually transmitted over the ensemble.
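    A sketch of that analogy in code (toy numbers for $[a; b]$):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    a, b = 0.3, 0.7            # my bit's statistical description [a; b]

    received = []
    for _ in range(100_000):
        my_bit = int(rng.random() < b)   # each trial: one definite bit value
        your_bit = my_bit                # I transmit exactly 1 bit; you copy it
        received.append(your_bit)

    print(np.mean(received))   # ~0.7: the continuous description [a; b] now
                               # applies to your bit, yet only 1 bit ever moved
                               # per trial -- the distribution lives in the ensemble
    ```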

    A qubit’s quantum state has 2 degrees of freedom, as it can be specified on the Bloch sphere with just two angles. The amount of data transmitted over the classical channel is 2 bits. Over an ensemble, those 2 bits become 2 continuous values, and thus the classical channel over an ensemble carries exactly the degrees of freedom needed to describe the complete quantum state of a single qubit.


  • I got interested in quantum computing as a way to combat quantum mysticism. Quantum mystics love to use quantum mechanics to justify their mystical claims, like quantum immortality, quantum consciousness, quantum healing, etc. Some mystics use quantum mechanics to “prove” things like we all live inside of a big “cosmic consciousness” and there is no objective reality, and they often reference papers published in the actual academic literature.

    These papers on quantum foundations are almost universally framed in terms of a quantum circuit, because they deal with quantum information science, giving you a logical argument about something “weird” in the logical structure of quantum mechanics, as shown in things like Bell’s theorem, the Frauchiger-Renner paradox, the Elitzur-Vaidman paradox, etc.

    If a person claims something mystical and sends you a paper, and you can’t understand the paper, how are you supposed to respond? But you can use quantum computing as a tool to help you learn quantum information science so that you can eventually parse the paper, and then you can know how to rebut their mystical claims. But without actually studying the mathematics you will be at a loss.

    You have to put some effort into understanding the mathematics. If you just go vaguely off of what you see in YouTube videos then you’re not going to understand what is actually being talked about. You can go through for example IBM’s courses on the basics of quantum computing and read a textbook on quantum computing and it gives you the foundations in quantum information science needed to actually parse the logical arguments in these papers and what they are really trying to say.


  • Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they cleverly found a way to use the space more effectively, they treated it as if they had packed more transistors into the same area, and so they would call it a smaller-nanometer process node, even though they quite literally did not shrink the transistors or fit more of them into the same area.

    This actually started to happen around 2015. These clever tricks were always exaggerated, because there is no objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, which gave huge leeway for exaggeration. In reality, actual performance gains have drastically slowed since then, and the cracks really started to show with the 5000 series GPUs from Nvidia.

    The 5090 is only super powerful because the die is larger, so it fits more transistors, not because they actually fit more per unit area. If you account for the die size, it’s actually even less efficient than the 4090 and significantly less efficient than the 3090. In order to pretend there have been upgrades, Nvidia has been releasing software for the GPUs for AI frame rendering and artificially locking that AI software behind the newer series GPUs. The program Lossless Scaling proves that you can, in principle, run AI frame rendering on any GPU, even ones from over a decade ago, and that Nvidia locking it behind specific GPUs is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.

    Chip improvements have been drastically slowing down for a decade now, and the industry just keeps trying to paper it over.