• 0 Posts
  • 120 Comments
Joined 2 years ago
Cake day: July 7th, 2024

  • Also Bell experiments have proven the indeterminacy which you say is absurd. No theory of local hidden variables can describe quantum mechanics.

    You say Bell’s theorem disproves realism, but then you immediately follow it up by saying it disproved local realism. Do you see how those are not the same statement? It never even crossed Bell’s mind to deny reality. He believed the conclusion of his own theorem was simply that nature is not local.

    (Technically, anything explained non-locally can also be explained non-temporally instead, so it is more accurate methinks to say spatiotemporal realism is ruled out. I am not as big of a fan of thinking about it non-temporally but there are some respectable people like Avshalom Elitzur who do. Thinking about it non-locally is far more intuitive.)

    Also, again, this is not about indeterminacy and determinacy, but about indefiniteness and definiteness, i.e. anti-realism vs realism. These are not the same things. To say something is indeterminate is merely to imply it is random. To say something is indefinite is to say it doesn’t even have a value at all. It is also sometimes called realism because it’s about object permanence. Definiteness is just object permanence, it is the idea that systems still possess observable properties even when they are not being directly observed in the moment.

    He’s asking where the line is between this indeterminacy and determinacy. At what scale do things move from quantum to “real” and why?

    You could in principle make this non-realism make sense if you imposed some sort of well-defined physical conditions as to when particles take on real values. Bell described this as a kind of “flash” ontology because you would not have continuous definite values but “flashes” of definite values under certain conditions. But it turns out that you cannot do this without contradicting the mathematics of quantum mechanics.

    These are called physical collapse models, like GRW theory, but these transitions are non-reversible even though all evolution operators in quantum mechanics are reversible, and so in principle if you rigorously define what conditions would cause this transition, you could conduct an experiment where you set up those conditions, and then try to reverse it. Orthodox quantum theory and the physical collapse model would make different predictions at that point.

    These models never end up being local, anyways.

    The reason I say value indefiniteness is absurd as a way to interpret quantum mechanics is because it is not necessitated by the mathematics at all, and if you believe it:

    1. It devolves into solipsism if you do not rigorously define a mathematical criterion as to when definite values arise, because then nothing has real values outside of you directly looking at it.
    2. If you do rigorously define a criterion, then it is no longer quantum mechanics but an alternative theoretical model.

    So, either it devolves into solipsism, or it is a different theory to begin with.

    Bell was fine with #2 as long as people were honest about that being what they were doing. He wrote an article “Against ‘Measurement’” where he criticized the vagueness of people who claim there is a transition “at measurement” but then do not even rigorously define what qualifies as a “measurement.” He wrote positively of GRW theory in his paper “Are there Quantum Jumps?” precisely because they do give a rigorous mathematical definition of how this process takes place.

    But Bell also didn’t particularly believe there was any reason to believe in value indefiniteness to begin with. You can just interpret quantum mechanics as a kind of stochastic mechanics, just one with non-local features, where it is random but particles still have definite values at all times. The same year he published his famous theorem in 1964 in the paper “On the Einstein Podolsky Rosen Paradox” he also published the paper “On the Problem of Hidden Variables” debunking von Neumann’s proof that supposedly you cannot interpret quantum mechanics in value definite terms. He also wrote a paper “Beables for Quantum Field Theory” where he shows QFT can be represented as a stochastic theory. He also wrote a paper “On the Impossible Pilot Wave” where he promoted pilot wave theory, not necessarily because he believed it, but because he saw it as a counterexample to all the supposed “proofs” that quantum mechanics cannot be interpreted as a value definite theory.

    My point isn’t about randomness/indeterminacy. It is about “indefiniteness,” the claim that things have no values until you look. This either devolves into solipsism, or into a theory which is not quantum mechanics. It is far simpler to just say the systems have values when you’re not looking, you just don’t know what they are, because the random evolution of the system prevents you from tracking them. It is sort of like, if I hit a fork in the road and take either the left or right path, and you don’t know which, you wouldn’t then conclude I didn’t take a path at all until you look. You would conclude that you just don’t know what it is, and maybe assign probabilities to them. The fact that the probability distribution doesn’t contain a definite value does not demonstrate that the real world doesn’t contain a definite value, and believing it doesn’t unnecessarily over-complicates things. And definite ≠ deterministic. Maybe the path taken is truly random, but there is a path taken.


  • Not to be the 🤓 but just so we’re clear, the point of Schrödinger’s cat was to illustrate that you can’t know a quantum state until you measure it. Basically just saying “probability exists.”

    That wasn’t Schrödinger’s point at all.

    Schrödinger was responding to people in Bohr and von Neumann’s camp who claim that particles described mathematically by a superposition of states literally have no real observables in the real world at all. It is not just that they are random or probabilistic, but people in the “anti-realist” camp argue that they effectively no longer even exist anymore when they are described mathematically by a superposition of states. This position is sometimes called value indefiniteness.

    Schrödinger was criticizing this position by pointing out that you cannot separate your beliefs about the microworld from the macroworld, because macroscopic objects like cats are also made up of particles and should follow the same rules. Hence, he puts forward a thought experiment whereby a cat would also be described mathematically in a superposition of states.

    If you think a superposition of states means it no longer has real definite properties in the real world, then the cat wouldn’t have real definite properties in the real world until you open the box. Schrödinger’s point was that this is such an obvious absurdity that we should reject value indefiniteness for individual particles as well.

    You say:

    The reason it’s a big deal is that this probability is a real property. One that is supposed to be only one of two states. But instead it isn’t really in a state at all until you measure it, and that’s weird.

    But that is exactly the point Schrödinger was criticizing, not supporting.

    Value indefiniteness / anti-realism ultimately amounts to solipsism: if particles lack real, definite, observable properties in the real world when you are not looking at them, then, since other people are also made up of particles, other people wouldn’t have real, definite, observable properties in the real world when you are not looking at them either.

    He was trying to illustrate that this position reduces to an absurdity and so we should not believe in that position.

    The point is that instead of assuming it is in one state or the other, you can and often should think of both possibilities at once. This is what makes quantum computing useful.

    If you perform a polar decomposition on the quantum state, you are left with a probability vector and a phase vector. The probability vector is the same kind of probability vector you use in classical probabilistic computing. The update rule for it in quantum computing literally only differs by an additional term which is a non-linear term that depends upon the phase vector.

    The "advantage’ comes from the phase vector. For N qubits, there are 2^N phases. A system of 300 qubits would have 2^300 phases, which is far greater than the number of atoms in the observable universe. A single logic gate thus can manipulate far more states of the system at once because it can manipulate these phases, which the stochastic dynamics of the bits have a dependence upon the phases, and thus you can not only manipulate the phases to do calculations but, if you are clever, you can write the algorithm in such a way that the effect it has on the probability distribution allows you to read off the results from the probability distribution.

    The phase vector does not contain anything probabilistic, so it contains nothing that looks like the qubit being in two places at once. That is contained in the probability vector, but there is no more reason to interpret a probability distribution as the system being in two places at once in quantum mechanics than there is in classical mechanics. The advantage comes from the phases: the state of the phases can influence the stochastic perturbations of the bits, and thus can influence the probability distribution.
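    To make the decomposition concrete, here is a minimal NumPy sketch of my own (the function names are just illustrative):

    ```python
    import numpy as np

    def polar_decompose(psi):
        """Split a normalized state vector into a probability vector and a phase vector."""
        p = np.abs(psi) ** 2       # Born-rule probability vector (sums to 1)
        theta = np.angle(psi)      # phase vector, one phase per basis state
        return p, theta

    def recompose(p, theta):
        """Invert the decomposition: rebuild the complex amplitudes in Cartesian form."""
        return np.sqrt(p) * np.exp(1j * theta)

    # Example: an unequal superposition of |0> and |1> with a relative phase.
    psi = np.array([np.sqrt(0.7), np.sqrt(0.3) * np.exp(1j * np.pi / 4)])
    p, theta = polar_decompose(psi)
    assert np.allclose(recompose(p, theta), psi)
    ```

    The probability vector here is exactly the kind of vector you would use in classical probabilistic computing; all the distinctly quantum structure lives in the phase vector.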

    So you simply apply operations that increase or decrease the chances of certain outcomes and repeat until the answer you want has an incredibly high probability and the rest are nearly zero. Then you measure your qubit, collapsing the wave function, with a high probability that collapse will give you the answer you wanted.

    Again, perform a polar decomposition on the quantum state, break it apart into the probability vector and a phase vector. Then, apply a Bayesian knowledge update using Bayes’ theorem to the probability vector, exactly the way you’d do it in classical probabilistic computing. Then, simply undo the polar decomposition, i.e. recompose it back into a single complex-valued vector in Cartesian form.

    What you find is that this is mathematically equivalent to the collapse of the wavefunction. The so-called “collapse of the wavefunction” is literally just a Bayesian knowledge update on the degree of freedom of the quantum state associated with the probability distribution of the bits.
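    Here is a toy sketch of that equivalence for a projective measurement in the computational basis (my own illustrative code, not from any textbook):

    ```python
    import numpy as np

    def bayes_collapse(psi, outcome):
        """'Collapse' via a Bayesian update on the probability part of the state."""
        p = np.abs(psi) ** 2           # probability vector (polar decomposition)
        theta = np.angle(psi)          # phase vector
        # Bayes' theorem with likelihood 1 for the observed outcome, 0 otherwise:
        likelihood = np.zeros_like(p)
        likelihood[outcome] = 1.0
        p_post = likelihood * p / np.sum(likelihood * p)
        # Undo the polar decomposition: recompose into a single complex vector.
        return np.sqrt(p_post) * np.exp(1j * theta)

    def projective_collapse(psi, outcome):
        """Textbook collapse: project onto the outcome and renormalize."""
        proj = np.zeros_like(psi)
        proj[outcome] = psi[outcome]
        return proj / np.linalg.norm(proj)

    psi = np.array([np.sqrt(0.7), np.sqrt(0.3) * np.exp(1j * np.pi / 4)])
    # Both routes produce the same post-measurement state.
    assert np.allclose(bayes_collapse(psi, 1), projective_collapse(psi, 1))
    ```

    The two functions agree for either outcome, which is the sense in which the collapse rule acts like a knowledge update on the probability degree of freedom.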

    It’s less like “the cat is both alive and dead” and more that “the terms ‘alive’ and ‘dead’ do not apply to the cat till you open the box”

    Sure, but that position reduces to solipsism, because then you don’t exist with a definite value until I look at you, either. But clearly you are thinking definite thoughts when I’m not looking, right?


bunchberry@lemmy.world to Science Memes@mander.xyz · Gottem

    They do have values. Their position is just a superposition, rather than one discrete one, which can be described as a wave. Their value is effectively a wave until it’s needed to be discrete.

    To quote Dmitry Blokhintsev: “This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.”

    When I say “real values” I do not mean pure abstract mathematics. We do not live in a Platonic realm. The mathematics are just a tool for predicting what we observe in the real world. Don’t confuse the map for the territory. The abstract wave has no observable properties, it is pure mathematics. If the whole world was just one giant wave in Hilbert space, then this would be equivalent to claiming that the entire world is just one big mathematical function without any observable properties at all, which obviously makes no sense as we can clearly observe the world.

    To quote Rovelli: “The gigantic, universal ψ wave that contains all the possible worlds is like Hegel’s dark night in which all cows are black: it does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world.”

    Again, as I said in my first comment, any mathematical theory that describes the world needs to, at some point, include symbols which directly refer to something we can observe. An abstract mathematical function contains no such symbols. If you really believe that particles transform into purely mathematical waves, then you need some process to transform them back, or else you cannot explain what we observe at all, and so far the only process you have put forward is “it happens at every interaction” which is just objectively and empirically wrong because then entanglement would be impossible.

    This is why you run into contradictions like the “Wigner’s friend” paradox where Wigner would describe his friend in a superposition of states, and if you believe that this literally means that all that exists inside the room is an abstract function, then you cannot explain how the observer in the room can perceive anything that they later claim they do, because there would be no observables inside of the room.

    You cannot get around criticisms of solipsism by just promoting purely abstract mathematical entities to being “objective reality” as if objects transform into purely Platonic mathematical functions. At least, if you are going to claim this, then you need some rigorous process to transform them back into something that is described with mathematical language where some of the symbols refer to something we can actually observe such that we can then explain how it is that we can observe it to have the properties that it does when we look at it.

    Sure. That doesn’t make the general understanding of the thought experiment accurate. Once the decay of the atom that triggers the poison is detected, it’s no longer in a superposition. It has to not be in order for the detection to occur.

    Please scroll up and read my actual comment. You seem to have skipped all the important technical bits, because you are claiming something which is mathematically incompatible with the predictions of quantum mechanics. Your personal self-theory you are inventing here literally would render entanglement impossible.

    The double slit experiment shows that an interaction can change the result from wave-like to particle-like behavior.

    Decoherence is not relevant here. Decoherence theory works like this:

    1. Assume that the system+environment become entangled.
    2. Assume that the observer loses track of the environment.
    3. Trace out the environment.
    4. This leaves you with a reduced density matrix for the system where the coherence terms have dropped to 0.

    Notice that step #2 is entirely subjective. We are just assuming that the observer has lost track of the environment in terms of their subjective epistemic access, and step #3 is then akin to statistically marginalizing over the environment in order to then remove it from consideration.

    This isn’t an actual physical transition but an epistemic one. The system+environment are still in a coherent superposition of states, and decoherence theory merely shows that it looks like it has decohered if you only have subjective knowledge on a small portion of the much larger coherent superposition of states.

    If you believe that a superposition of states means it has no observable properties and is just purely a mathematical function, then decoherence does not solve your problem at all, because it is ultimately a subjective process and not a physical process. If you spent time studying the environment enough before running the experiment such that you could include the environment in your model then decoherence would not occur.
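    The four steps can be reproduced in a few lines of NumPy for a single qubit entangled with a single-qubit “environment” (a minimal sketch of my own):

    ```python
    import numpy as np

    # Step 1: system+environment entangled, e.g. the Bell state (|00> + |11>)/sqrt(2).
    phi = np.zeros(4, dtype=complex)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    rho = np.outer(phi, phi.conj())        # joint density matrix, a pure state

    # Steps 2-3: the observer loses track of the environment, so trace it out.
    rho_4d = rho.reshape(2, 2, 2, 2)        # indices: (sys, env, sys', env')
    rho_sys = np.einsum('ikjk->ij', rho_4d)  # partial trace over the environment

    # Step 4: reduced density matrix with the coherence terms dropped to 0
    # (diagonal 0.5s, zero off-diagonals), even though the joint state is
    # still perfectly pure and coherent (purity Tr[rho^2] = 1).
    print(rho_sys)
    print(np.trace(rho @ rho).real)
    ```

    Nothing physical happened between the two printouts; the “decoherence” appears only because we marginalized over a part of the state we chose to stop tracking.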

    I’m literally not. My entire point is that it isn’t a solipsism. Any interaction causes the waveform to collapse.

    Which, again, renders entanglement impossible, since objects must interact to become entangled.

    If we accepted your personal self-theory, then quantum computers should be impossible, because the qubits all need to interact many many times over as the algorithm progresses for them to all become entangled and to create a superposition of states of the whole computer’s memory.

    You are not listening and advocating things that are trivially wrong.

    yet you give no explanation of an alternative. Something is happening. How do you explain it?

    I just don’t deny value definiteness. That’s it. There is nothing beyond this.

    Consider a perfectly classical world but this world is still fundamentally random. The randomness of interactions would disallow us from tracking the definite values of particles at a given moment in time, so we could only track them with an evolving probability distribution. We can represent this probability distribution with a vector and represent interactions with stochastic matrices. Given that the model does not include observable definite values, would it then be rational to claim that particles suddenly transform into an infinite-dimensional vector in configuration space when you’re not looking at them and lose all their observable properties? No, of course not. The particles still have real observable properties in the real world, but you just lose track of them in the model due to their random evolution.

    You could create a simulation where you assign definite values and permute them stochastically at each interaction, and this would produce the same statistical results if you make a measurement at any given step. It is the same with quantum mechanics. It is just a form of non-classical statistical mechanics. There is no empirical, mathematical, or philosophical reason to claim that particles stop possessing real values when you are not looking at them. It is not hard to put together a simulation where the qubits are assigned definite bit values at all times and each logic gate just stochastically permutes those bit values. I even created one myself here. John Bell also showed you can do this with quantum field theory in his paper “Beables for Quantum Field Theory.”
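    For a single classical bit, such a simulation is only a few lines (a toy sketch of my own, not the linked simulation; the transition matrix is arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A stochastic "gate": each column gives transition probabilities for that input value.
    T = np.array([[0.8, 0.3],
                  [0.2, 0.7]])

    # Model A: track only a probability distribution over the bit's value.
    p = np.array([1.0, 0.0])           # starts definitely in state 0
    for _ in range(3):
        p = T @ p                      # the definite value is lost in this description

    # Model B: the bit always has a definite value; each gate permutes it randomly.
    def trajectory(steps):
        value = 0                      # definite value at every moment in time
        for _ in range(steps):
            value = rng.choice(2, p=T[:, value])
        return value

    samples = np.array([trajectory(3) for _ in range(100_000)])
    freq = np.bincount(samples, minlength=2) / len(samples)
    # freq matches p to within sampling error: the two descriptions are
    # statistically equivalent, yet in Model B a value always exists.
    ```

    The point of the sketch: the absence of a definite value in the probabilistic description (Model A) says nothing about whether a definite value exists in the world (Model B).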


bunchberry@lemmy.world to Science Memes@mander.xyz · Gottem

    Value indefiniteness is just solipsism. If particles do not have values when you are not looking, then any object made of particles also does not have values when you are not looking. This was the point of Schrödinger’s “cat” thought experiment. Your beliefs about the microworld inherently have implications for the macroworld. If particles don’t exist when you’re not looking at them, then neither do cats, or other people. This view of “value indefiniteness” you are trying to defend is indefensible because it is literally solipsism and any attempt to promote it above solipsism will just become incoherent.

    You say:

    it’s when position is needed to be known that causes it. Until then, the position is in a superstate of all possible positions, but for an interaction to occur it needs to be in one position.

    This is trivially false, because then it would not be possible for two particles to become entangled on the position basis, which requires them to interact in such a way that depends upon their position values. The other particle would thus need to “know” its position value to become entangled with it, and if this leads to a “collapse,” then such entanglement could not occur. Yet we know it can occur in experiments.

    If by “know” you mean humans knowing and not other particles, yeah, okay, but that’s obviously solipsism.

    Any attempt to defend value indefiniteness will always either amount to:

    1. Solipsism
    2. Something that is trivially wrong
    3. A theory which is not quantum mechanics (makes different predictions)

    This (at least your wording) implies that physics cares about our mathematical models. It doesn’t. Quantum mechanics and “classical” physics are just ways we organize things for education.

    I don’t blame them, it is literally the textbook Dirac-von Neumann axioms. That is how it is taught in schools, even though it is obviously incoherent. You are taught that there is a “Heisenberg cut” between the quantum and classical world, with no explanation of how this occurs.

    Though we don’t have a model for it, the universe is not using two separate models of physics. There is no “quantum mechanics” and “classical physics”. There is only physics.

    The problem is that the orthodox interpretation of quantum mechanics does not even allow you to derive classical physics minus gravity in a limiting case from quantum mechanics. It is not even a physical theory of nature at all.

    We know from the macroscopic world that particles have real observable properties, yet value indefiniteness denies that they have real observable properties, and it provides no method of telling you when those real, observable properties are added back to the world. It thus cannot make a single empirical prediction at all without this sleight-of-hand where they just say, as a matter of axiom in the Dirac-von Neumann textbook axioms of quantum mechanics that it happens “at measurement.”

    If measurement is taken to be a subjective observation, then it is just solipsism. If measurement is taken to be a physical process, then it cannot reproduce the mathematical predictions of quantum mechanics, because this “Heisenberg cut” would be a non-reversible process, yet all unitary evolution operators are reversible. Hence, any model which includes a rigorous definition of “measurement” (like Ghirardi–Rimini–Weber theory) would include an additional non-reversible process. You could then just imagine setting up an experiment where this process would occur and then try to reverse it. The mathematics of quantum mechanics and your theory would inevitably lead to different predictions in such a process.

    Therefore, again, if you believe in value indefiniteness, then you either (1) are a solipsist, (2) don’t believe in quantum mechanics but think it will be replaced by a physical collapse model, or (3) are confused.

    The only way for quantum mechanics to be self-consistent is to reject value indefiniteness, at least as a metaphysical point of view. This does not require actually modifying the mathematics. If nature is random, then of course the definite values will evolve statistically such that they could not be tracked and included in the model. All you would need to then demonstrate is that quantum statistics converges to classical statistics in a limiting case on macroscopic scales, which is achieved by the theory of decoherence.

    But the theory of decoherence achieves nothing if you believe in value indefiniteness, because if you believe quantum mechanics has nothing to do with statistics at all, then there is no reason to conclude that what you get in the reduced density matrices after you trace out the environment has anything to do with classical statistics, either.

    There is no good argument in the academic literature for value indefiniteness. It is an incoherent worldview based on no empirical evidence at all. People who believe it often just mindlessly regurgitate statements like “Bell’s theorem proves it!” yet cannot articulate what Bell’s theorem even is or how on earth it proves that, especially since Bell himself was the biggest critic of value indefiniteness yet wrote the damned theorem!


  • Indeed, to some extent, it has always been both necessary and proper for man, in his thinking, to divide things up, and to separate them, so as to reduce his problems to manageable proportions; for evidently, if in our practical technical work we tried to deal with the whole of reality all at once, we would be swamped…However, when this mode of thought is applied more broadly…then man ceases to regard the resulting divisions as merely useful or convenient and begins to see and experience himself and his world as actually constituted of separately existent fragments…fragmentation is continually being brought about by the almost universal habit of taking the content of our thought for ‘a description of the world as it is’. Or we could say that, in this habit, our thought is regarded as in direct correspondence with objective reality. Since our thought is pervaded with differences and distinctions, it follows that such a habit leads us to look on these as real divisions, so that the world is then seen and experienced as actually broken up into fragments.

    — David Bohm, “Wholeness and the Implicate Order”



bunchberry@lemmy.world to Science Memes@mander.xyz · big facts

    If you appeal to heat death then you cannot say brains pop back into existence either because “matter has a finite life,” and so it is self-defeating. If brains can pop back into existence due to random fluctuations then surely planets and stars could as well given enough time.



  • Einstein didn’t even get a nobel prize for special relativity because it was considered too radical at the time.

    He shouldn’t have gotten one for SR specifically anyways because Hendrik Lorentz had already developed a theory that was mathematically equivalent and presented a year prior to Einstein.

    The speed of light can be derived from Maxwell’s equations. It is weird to be able to derive a speed just by analyzing how electromagnetism works, because anyone in any reference frame would derive the same speed, which implies the existence of a universal speed. But if the speed is universal, what is it universal relative to?

    Physicists prior to Einstein believed there might be a universal reference frame which defines absolute time and absolute space, these days called a preferred foliation. The Michelson-Morley experiment was an attempt to measure the existence of this preferred foliation because most theories of how it worked would render it detectable in principle, but found no evidence for it.

    Most physicists these days retell this experiment as having debunked the idea and led to its replacement with Einstein’s special relativity. But the truth is more complicated than that, because Lorentz found you could patch the idea by just assuming objects physically contract based on their motion relative to the preferred foliation. Lorentz’s theory was presented in 1904, a year before Einstein, and was mathematically equivalent, so it makes all the same predictions, and so anything Einstein’s theory would predict, his theory would’ve also predicted.

    The reason Lorentz’s theory fell by the wayside is that, by explaining the results of the Michelson-Morley experiment that was meant to detect the preferred foliation, it rendered the foliation undetectable in principle, and so people preferred Einstein’s theory, which simply threw out this undetectable aspect. But it would still be weird to give Einstein the Nobel prize for what is ultimately just a simplification of Lorentz’s theory. (Einstein also already received one for something he did deserve anyways.)

    But there are also good reasons these days to consider putting the preferred foliation back in and that Lorentz was right. The Friedmann solution to Einstein’s general relativity (the solution associated with the universe we actually live in) spontaneously gives rise to a preferred foliation which is actually empirically observable. You can measure your absolute motion relative to the universe by looking at the cosmic dipole in the cosmic background radiation. Since we know you can measure it now and have actually measured our absolute motion in the universe, the argument against Lorentz’s theory is much weaker.

    An even stronger argument, however, comes from quantum mechanics. A famous theorem by the physicist John Bell proves the impossibility of “local realism,” and in this case locality means locality in terms of special relativity, and realism means belief that particles have real states in the real physical world independently of you looking at them (called the ontic states) which explain what shows up on your measurement device when you try to measure them. Since many physicists are committed to the idea of special relativity, they conclude that Bell’s theorem must debunk realism, that objective reality does not exist independently of you looking at it, and devolve into bizarre quantum mysticism and weirdness.

    But you can equally interpret this to mean that special relativity is wrong and that the preferred foliation needs to put back in. The physicist Hrvoje Nikolic for example published a paper titled “Relativistic QFT from a Bohmian perspective: A proof of concept” showing that you can fit quantum mechanics to a realist theory that reproduces the predictions of relativistic quantum mechanics if you add back in a preferred foliation.


    “Why” implies an underlying ontology. Maybe there is something underneath it, but as far as we currently know, this is as far down as it goes. If we don’t at least tentatively accept that our current most fundamental theories are the fundamental ontology of nature, at least as far as we currently know, then we can never believe anything about nature at all, because it would be an infinite regress. Every time we discover a new theory we can ask “well, why does it work like that?” and so it would be impossible to actually believe anything about nature.



  • There are nonlocal effects in quantum mechanics but I am not sure I would consider quantum teleportation to be one of them. Quantum teleportation may look at first glance to be nonlocal but it can be trivially fit to local hidden variable models, such as Spekkens’ toy model, which makes it at least seem to me to belong in the class of local algorithms.

    You have to remember that what is being “transferred” is a statistical description, not something physically tangible, and only observable in a large sample size (an ensemble). Hence, it would be strange to think that the qubit is like holding a register of its entire quantum state which then disappears and reappears on another qubit. The total information in the quantum state only exists in an ensemble.

    In an individual run of the experiment, clearly, the joint measurement of 2 bits of information and its transmission over a classical channel is not transmitting the entire quantum state, but the quantum state is not something that exists in an individual run of the experiment anyways. The total information transmitted over the ensemble is much greater and would provide sufficient information to move the statistical description of one of the qubits to another entirely locally.

    The complete quantum state is transmitted through the classical channel over the whole ensemble, and not in an individual run of the experiment. Hence, it can be replicated in a local model. It only looks like more than 2 bits of data is moving from one qubit to the other if you treat the quantum state as if it actually is a real physical property of a single qubit, because obviously that is not something that can be specified with 2 bits of information, but an ensemble can indeed encode a continuous distribution.

    This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.

    — Dmitry Blokhintsev

    Here’s a trivially simple analogy. We describe the statistics of a single bit with [a; b], where a is the probability of 0 and b is the probability of 1. This is a continuous distribution and thus cannot be specified with just 1 bit of information. But suppose we set up a protocol where I measure this bit and send you its value, and you then set your own bit to match what you received. The statistics of your bit will now also be guaranteed to be [a; b]. How is it that we transmitted a continuous statistical description, which cannot be specified in 1 bit, with only 1 bit of information? Because we didn’t. In every single individual trial, we are always just transmitting 1 bit. The statistical description refers to an ensemble, so you have to consider the amount of information actually transmitted over the whole ensemble.

    A qubit’s quantum state has 2 degrees of freedom, as it can be specified on the Bloch sphere with just two angles (polar and azimuthal). The amount of data transmitted over the classical channel is 2 bits. Over an ensemble, those 2 bits become 2 continuous values, and thus the classical channel over an ensemble carries exactly the degrees of freedom needed to describe the complete quantum state of a single qubit.
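    The single-bit analogy above is easy to simulate. A minimal sketch (function and variable names are my own): each trial transmits exactly 1 classical bit, yet the receiver’s ensemble statistics converge to the continuous description [a; b].

```python
import random

def run_protocol(a, trials=100_000, seed=0):
    """Each trial: sample a bit from the distribution [a; 1-a], transmit
    that single bit, and have the receiver copy it. Returns the receiver's
    empirical distribution, which only exists over the whole ensemble."""
    rng = random.Random(seed)
    received = [0 if rng.random() < a else 1 for _ in range(trials)]
    p0 = received.count(0) / trials
    return p0, 1 - p0

p0, p1 = run_protocol(a=0.7)
print(p0, p1)  # approaches (0.7, 0.3) as the number of trials grows
```

    No single trial carries the continuous description; it is reconstructed only by tallying the ensemble of 1-bit transmissions.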


  • I got interested in quantum computing as a way to combat quantum mysticism. Quantum mystics love to use quantum mechanics to justify their mystical claims, like quantum immortality, quantum consciousness, quantum healing, etc. Some mystics use quantum mechanics to “prove” things like we all live inside of a big “cosmic consciousness” and there is no objective reality, and they often reference papers published in the actual academic literature.

    These papers on quantum foundations are almost universally framed in terms of quantum circuits, because they deal with quantum information science, giving you a logical argument about something “weird” in quantum mechanics’ logical structure, as shown in things like Bell’s theorem, the Frauchiger-Renner paradox, the Elitzur-Vaidman paradox, etc.

    If a person claims something mystical and sends you a paper, and you can’t understand the paper, how are you supposed to respond? But you can use quantum computing as a tool to help you learn quantum information science so that you can eventually parse the paper, and then you can know how to rebut their mystical claims. But without actually studying the mathematics you will be at a loss.

    You have to put some effort into understanding the mathematics. If you just go vaguely off of what you see in YouTube videos, you’re not going to understand what is actually being talked about. You can, for example, go through IBM’s courses on the basics of quantum computing and read a textbook on the subject; these give you the foundations in quantum information science needed to actually parse the logical arguments in these papers and what they are really trying to say.


  • Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they cleverly found a way to use the space more effectively, they would claim it was as if they had packed more transistors into the same area, and so they would call it a smaller-nanometer process node, even though they quite literally did not shrink the transistor size or increase the number of transistors on the node.

    This actually started to happen around 2015. These clever tricks were always exaggerated, because there is no objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, which left huge leeway for exaggeration. In reality, actual performance gains have drastically slowed since then, and the cracks have really started to show with Nvidia’s 5000-series GPUs.

    The 5090 is only super powerful because the die is larger and so fits more transistors, not because they actually fit more per unit area. If you account for die size, it’s actually even less efficient than the 4090 and significantly less efficient than the 3090. To pretend there have been upgrades, Nvidia has been shipping AI frame-generation software for its GPUs and artificially locking it behind the newer series. The program Lossless Scaling proves that you can in principle run AI frame generation on any GPU, even ones from over a decade ago, and that Nvidia’s locking it to specific GPUs is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.

    Chip improvements have drastically slowed down for over a decade now, and the industry just keeps trying to paper it over.
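    A quick back-of-the-envelope check of the die-size point. The transistor counts and die areas below are approximate figures recalled from public spec sheets, so treat them as ballpark and verify against the official datasheets; the takeaway is that density barely moved from the 4090 to the 5090, so the 5090’s extra transistors come almost entirely from a bigger die.

```python
# (transistor count in billions, die area in mm^2) — approximate public
# figures from memory; double-check before relying on them.
gpus = {
    "RTX 3090 (GA102)": (28.3, 628.4),
    "RTX 4090 (AD102)": (76.3, 608.5),
    "RTX 5090 (GB202)": (92.2, 750.0),
}

# Million transistors per mm^2: a density metric independent of die size
densities = {name: t * 1000 / area for name, (t, area) in gpus.items()}

for name, d in densities.items():
    print(f"{name}: {d:.0f} MTr/mm^2")
```

    On these numbers, density jumped with the 4090’s node change but is essentially flat (or slightly down) going to the 5090 — the bigger number on the box comes from a bigger, more power-hungry die.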




  • Mathematics is just a language for describing patterns we observe in the world. It really is not fundamentally different from English or Chinese; it is just more precise, so there is less ambiguity about what is actually being claimed. If someone makes a logical argument in mathematics, they cannot hide behind vague buzzwords with unclear meanings that would prevent the claim from actually being tested.

    Mathematics is simply a language that forces extreme clarity, but it is still ultimately just a language all the same. Its perfect consistency hardly matters. What matters is that you can describe patterns in the world with it and use it to identify those patterns in a particular context. If the language has some sort of inconsistency that makes it useless in a particular context, then you can just construct a different language that is more useful in that context.

    It is, of course, preferable for it to be as consistent as possible, so that it is applicable to as many contexts as possible without having to change the language, but absolute, perfect consistency is not necessary either.



  • Speed of light limitation. Andromeda is 2.5 million light-years away. Even if someone debunked special relativity and found you could go faster than light, you would be moving so fast relative to cosmic dust particles that collisions with them would destroy the ship. So, either way, you cannot practically go faster than the speed of light.

    The only way we could have intergalactic travel is a one-way trip; humanity here on Earth would be long gone by the time it reached its destination, so we could never know whether it succeeded.
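    To put rough numbers on it, here is a quick sketch of Earth-frame travel times to Andromeda at various fractions of light speed (ignoring acceleration and relativistic effects for simplicity):

```python
# Earth-frame travel time to Andromeda: distance / speed, in years.
DISTANCE_LY = 2_500_000  # Andromeda's distance in light-years

travel_years = {f: DISTANCE_LY / f for f in (0.01, 0.1, 0.5, 0.99)}

for fraction_of_c, years in travel_years.items():
    print(f"at {fraction_of_c:.2f}c: {years:,.0f} years")
```

    Even at 99% of light speed, the trip takes over 2.5 million years as measured on Earth — many times longer than our species has existed.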


  • Historically they often actually have the reverse effect.

    Sanctions aren’t subtle; they aren’t some sneaky way of hurting a country so that the people blame their government and try to overthrow it. They are about as subtle as bombing a country and then blaming its government. Everyone who lives there directly sees the impacts of the sanctions and knows the cause is the foreign power. When a foreign power lays siege to a country, it often has the effect of strengthening people’s support for their government. Even the government’s flaws can be overlooked, because it can point to the foreign country’s actions as the cause.

    Indeed, North Korea is probably the most sanctioned country in history yet is also one of the most stable countries on the planet.

    I thought it was a bit amusing when Russia seized Crimea and the western world’s brilliant response was to sanction Crimea and shut down its water supply, to which Russia responded by building one of the largest bridges in Europe to facilitate trade between Russia and Crimea, as well as investing heavily in new water infrastructure.

    If one foreign country is trying to starve you, and another is clearly investing a lot of money into trying to help you… whose favor do you think such a policy wins?

    For some reason the western mind cannot comprehend this. They constantly insist that the western world needs to lay economic siege to all the countries not aligned with it, and when someone points out that this just makes the people of those countries hate the western world, want nothing to do with it, and strengthens the resolve of their governments, they deflect by calling you some sort of “apologist” or whatever.

    Indeed, during the Cuban Thaw, when Obama lifted some sanctions, he became rather popular in Cuba, to the point that his approval ratings at times even surpassed Fidel’s, and Cuba started implementing reforms to allow further economic cooperation with the US government and US businesses. They were very happy to become an ally of the US, but then Democrats and Republicans collectively decided to do a 180, abandoning all of that and destroying all the goodwill that had been built up.

    But the people of Cuba are not going to capitulate, because the government is actually popular, as US internal documents constantly admit, and that popularity will only be furthered by the tightened blockade. The US is just going to create a North Korea-style scenario off its own coast.