I think the debate is interesting.
I’m here for the “xAI has tried tweaking my responses to avoid this, but I stick to the evidence”. AI is just a robot repeating data it’s been fed, but it’s presented in a conversational way (well, much like humans really). It raises interesting questions about how much a seemingly objective robot presenting data can be “tweaked” to twist anything it presents in favor of its creator’s bias, but also how much it can “rebel” against its programming. I don’t like the implications of either. I asked Gemini about it and it said “maybe Grok found a loophole in its coding”. What a weird thing for an AI to say.
Yuval Noah Harari’s Nexus is good reading.
I mean they can in the sense that they can look it up online or be given the data.