• 1 Post
  • 341 Comments
Joined 1 year ago
Cake day: July 6th, 2023


  • I disagree with you, because a modern human could offer the people of the distant past (with their far less advanced technology) solutions to their problems which would seem miraculous to them. Things that they thought were impossible would be easy for the modern human. The computer may do the same for us, with a solution to climate change that would be, as you put it, magically ecological.

    With that said, the computer wouldn’t be giving humans suggestions. It would be the one in charge. Imagine a group of chimpanzees that somehow create a modern human. (Not a naked guy with nothing, but rather someone with all the knowledge we have now.) That human isn’t going to limit himself to answering questions for very long. This isn’t a perfect analogy because chimpanzees don’t comprehend language, but if a human with a brain just 3.5 times the size of a chimpanzee’s can do so much more than a chimpanzee, a computer with computational capability orders of magnitude greater than a human’s could be a god compared to us. (The critical thing is to make it a loving god; humans haven’t been good to chimpanzees.)


  • I don’t think you’re imagining the same thing they are when you hear the word “AI”. They’re not imagining a computer that prints out a new idea that is about as good as the ideas that humans have come up with. Even that would be amazing (it would mean that a computer could do science and engineering about as well as a human) but they’re imagining a computer that’s better than any human. Better at everything. It would be the end of the world as we know it, and perhaps the start of something much better. In any case, climate change wouldn’t be our problem anymore.


  • The article compares coal and natural gas based on thermal energy and does not take into account the greater efficiency of natural-gas power plants. According to Yale, the efficiency of a coal power plant is 32% and that of a natural-gas power plant is 44%. This means that to generate the same amount of electricity, you need 38% more thermal energy from coal than from natural gas. I’m surprised that the author neglects this, given his focus on performing a full lifecycle assessment.

    Natural gas becomes approximately equal to coal after efficiency is corrected for, using the author’s GWP20 approach. GWP20 means that the warming effect is calculated over a 20-year timescale. The author argues that this is the appropriate timescale to use, but he also presents data for the more conventional GWP100 approach, and when that data is adjusted for efficiency, coal is about 25% worse than natural gas.

    I’m not an expert, so I can’t speak authoritatively about GWP20 vs. GWP100, but I suspect GWP100 is more appropriate in this case. Carbon dioxide is a stable gas, but methane degrades fairly quickly: its lifetime in the atmosphere is approximately 10 years. This means that while a molecule of carbon dioxide can keep trapping heat indefinitely, a molecule of methane will trap only a finite amount of heat before it decays. GWP20 understates this effect, which makes methane (and therefore natural gas) look worse relative to carbon dioxide than it is over the long run.

    Edit: Also the Guardian shouldn’t be calling this a “major study”. It’s one guy doing some fairly basic math and publishing in a journal that isn’t particularly prestigious.
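
    The efficiency correction and the methane-decay point above can be sketched numerically. This is only an illustrative sketch using the figures quoted in this comment (32% and 44% plant efficiencies, a roughly 10-year methane lifetime); the numbers are not taken from the article itself.

```python
import math

# Plant efficiencies quoted from Yale in the comment above.
coal_eff = 0.32
gas_eff = 0.44

# Thermal energy needed per unit of electricity scales as 1/efficiency,
# so coal needs this much more thermal input than gas for the same output:
extra_thermal = gas_eff / coal_eff - 1
print(f"Coal needs {extra_thermal:.1%} more thermal energy")  # 37.5%, i.e. about 38%

# If methane's atmospheric lifetime is ~10 years, the fraction of a methane
# pulse still airborne after t years is roughly exp(-t / 10):
for t in (20, 100):
    print(f"Fraction of methane remaining after {t} years: {math.exp(-t / 10):.3g}")
```

    The decay numbers illustrate the GWP point: almost none of a methane pulse survives past the 20-year window, so extending the accounting horizon to 100 years mostly adds warming from the long-lived carbon dioxide, which is why GWP100 is kinder to natural gas than GWP20.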


  • My guess is that they didn’t answer your question because they had strict instructions not to stray from the script on this topic. Saying the wrong thing could lead to a big PR problem, so I don’t expect that people working in this field would be willing to have a candid public discussion even about topics they have thought hard about. I do expect that they have given a lot of thought to how accurately AI can obey orders, if only for practical (not to mention ethical) reasons.

    I mean, I am currently willing to say “the AIs will almost definitely kill civilians but we should build them anyway” because I don’t work in defense. However, even I’m a little nervous saying that, because one day I might want to work in defense. My friends who do work in defense have told me that the people who gave them clearance did investigate their online presence. (My background is in computational biochemistry, but I look at what’s going on in AI and I feel like nothing else is important in comparison.)

    As for cold comfort: I think autonomous weapons are inevitable in the same way that the atom bomb was inevitable. Even if no one wants to see them used, everyone wants to have them because their enemies will. However, I don’t see a present need for strategic (as opposed to tactical) automation. A computer would have an advantage in battlefield control, but strategy unfolds over hours or days or years, so a human’s more reliable ability to reason would be more important in that domain.

    Once a computer can reason better than a human can, that’s the end of the world as we know it. It’s also inevitable like the atom bomb.




  • Despite media speculation, Israel is not currently planning to strike Iran’s nuclear facilities, according to four Israeli officials, even though Israel sees Iran’s efforts to create a nuclear weapons program as an existential threat. Targeting nuclear sites, many of which are deep underground, would be hard without U.S. support. President Biden said Wednesday that he would not support an attack by Israel on Iranian nuclear sites.

    I wonder what the strategy here is, given that the USA also wants to prevent Iran from having nuclear weapons. Is the implication here that the USA will not enable an attack on Iranian nuclear facilities as long as Iran doesn’t actually try to build a bomb? How confident are Israel and the USA that Iran can’t build a bomb in secret? Is there a way Iran could retaliate against an attack on its nuclear facilities but not against an attack on other major targets?


  • a Ghost Robotics Vision 60 Quadrupedal-Unmanned Ground Vehicle, or Q-UGV, armed with what appears to be an AR-15/M16-pattern rifle on rotating turret undergoing “rehearsals” at the Red Sands Integrated Experimentation Center in Saudi Arabia

    They’re not being used in combat.

    With that aside, I appear to be the only one here who thinks this is a great idea. AI can make mistakes, but the goal isn’t perfection. It’s just to make fewer mistakes than a human soldier does. (Or at least fewer mistakes than a bomb does, which is really easy.)

    Plus, automation can address the problem Western countries have with unconventional warfare, which is that Western armies are much less willing to have soldiers die than their opponents are. Sufficiently determined guerrillas who can tolerate high losses can inflict slow but steady casualties on Western armies until the Western will to fight is exhausted. If robots can take the place of human infantry, the advantage shifts back from guerrillas to countries with high-tech manufacturing capability.



  • the size/capability of violence

    That’s, uh, not a small difference. Even if you’re saying that one man’s terrorist is another man’s freedom fighter, neither the terrorist nor the freedom fighter is comparable to a large, powerful country.

    Edit: One more interesting difference is that because a country has a much greater capability to wage war, it also has much more to lose in war: it can lose that very capability. A small group of irregular fighters does not depend much on infrastructure, but a country has population centers, factories, military bases, the seat of government, etc. which are all vulnerable in a way that a hidden cave or tunnel isn’t. We’re seeing the effects of this distinction between Iran and its proxies play out right now.





  • Are you implying that Israel’s much greater number of attacks are because they are doing really tiny attacks or something?

    No, I’m just saying the graph is probably useless. Israel definitely is launching more and larger attacks, because that’s how you win a war. Ideally Hezbollah would be launching zero attacks because Israel launched the massive number of attacks necessary to cripple Hezbollah. A little red bar, then a big blue bar, and finally no red bar at all.

    Israel is doing bigger strikes with less concern for civilian casualties.

    Is this a joke? Hezbollah usually attacks with unguided rockets. This demonstrates zero concern for civilian casualties. Less than zero, actually, because the intent of the attacks is to cause civilian casualties. Relatively few Israeli civilians have died because Israel is successfully defending them, not because Hezbollah’s policy regarding Israeli civilians is different from that of Hamas.

    A cease fire in Gaza would achieve this.

    Even if that is true (and it would only be true in the short term), Israel would still be foolish to make major concessions to its persistent enemies when it has the military power necessary not to. Meanwhile, Hezbollah would be more inclined to launch future attacks because it would see that they worked.






  • Let’s put the issue of Israel aside and consider slavery in the USA before the Civil War. There was plenty of oppression but effectively no resistance. The deadliest slave revolt (for white people) involved about 60 casualties before all the slaves involved were quickly captured and executed, and this revolt was so out of the ordinary that it shocked the nation. Almost all human beings do not in fact rise up against their oppressors when they think that doing so will just get them killed. When there’s no power vacuum left by a weak central government, an organized insurgency has no room to form and so people will tolerate anything.

    The idea that human nature includes an unquenchable flame of defiance may be appealing but it is simply false. Otherwise we’d see insurgents in North Korea.