25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)

  • I’ve noticed, at least with the model I occasionally use, that the best way I’ve found to consistently get western eyes isn’t to specify round eyes or to ban almond-shaped eyes, but to make the character blonde and blue-eyed (or make them a cowgirl or some other stereotype rarely associated with Asian women). If you want to generate a western woman with straight black hair, you are going to struggle.

    I’ve also noticed that if you want a chest smaller than DDD, it’s almost impossible with some models — unless you specify that they are a gymnast. The model makers are so scared of generating a chest that could ever be perceived as less than robustly adult that just generating realistic proportions is impossible by default. But for some reason gymnasts are given a pass, I guess.

    This can be addressed with LoRAs and other tools, but every time you run into one of these hard associations, you have to assemble a bunch of pictures demonstrating the feature you want, and the images you choose had better not be too self-consistent, or you might accidentally bias some other trait you didn’t intend to.

    Contrast that with a human artist, who can draw whatever they imagine without having to translate it into AI terms or worry about concept-bleed. Like, I want portrait-style, but now there are framed pictures in the background of 75% of the gens, so instead I have to replace portrait with a half-dozen other words: 3/4 view, posed, etc.

    Hard association is one of the tools AI relies on — a hand has 5 fingers and is found at the end of an arm, etc. The associations it makes are based on the input images, and the images selected or available are going to contain other biases just because, for example, there are very few examples of Asian women wearing cowboy hats and lassoing cattle.

    Now, I rarely have any desire to generate images, so I’m not playing with cutting edge tools. Maybe those are a lot better, but I’d bet they’ve simply mitigated the issues, not solved them entirely. My interest lies primarily in text gen, which has similar issues.



  • We started putting our shit up almost immediately after Halloween. I don’t mind all the gaudy bullshit, just the work and storage space. I just want to put up projector lights. My wife complains that they look like someone didn’t put any effort in — I said that’s exactly why I like them. At least we were able to agree on a prelit tree with no extra ornaments. I do miss the extravagant trees my grandma put up when I was little but it’s so much breakable glass shit.

    Last year I put up permanent Govee lights. They were pretty good but then we had our roof redone this fall and I noticed half of them don’t work now. C’est la vie.


  • The people releasing public models aren’t the ones doing this for profit. Mostly. I know OpenAI and DeepSeek both have. Guess I’ll have to go look up who trained GLM, but I suspect the resources will always be there to push the technology forward at a slower pace. People will learn to do more with fewer resources, and that’s where the bulk of the gains will be made.

    Edit: A Chinese university trained GLM. Which is the sort of place where I expect research will continue to be done.


  • I pay for it. One of the services I pay for is about $25/mo, and they release about one update a year or so. It’s not cutting edge, just specialized. And they are making a profit doing a bit of tech investment and running the service, apparently. But also they are just tuning and packaging a publicly available model, not creating their own.

    What can’t be sustained is this sprint to AGI or to always stay at the head of the pack. It’s too much investment for tiny gains that ultimately don’t move the needle a lot. I guess if the companies all destroy one another until only one remains, or someone really does attain AGI, they will realize gains. I’m not sure I see that working out, though.



  • I don’t need to do that. And what’s more, it wouldn’t be any kind of proof because I can bias the results just by how I phrase the query. I’ve been using AI for 6 years and use it on a near-daily basis. I’m very familiar with what it can do and what it can’t.

    Between bias and randomness, you will have images that are evaluated as both fake and real at different times to different people. What use is that?




  • What is the message to the audience? That ChatGPT can investigate just as well as BBC.

    What about this part?

    Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.

    Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that’s just as valid as BBC reporting.


  • Okay I get you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that BBC has a specialized chatbot that orchestrates expert APIs including for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I’m wrong, what is the message to the audience? That ChatGPT can investigate just as well as BBC. Which may well be the case, but it oughtn’t be.

    My second point still stands. If you sent someone to look at the thing and it’s fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.



  • A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

    What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?

    A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged

    So they did. Why are we talking about ChatGPT then? You could just leave that part out. It’s useless. Obviously a fake photo has been manipulated. Why bother asking?



  • It’s openings, not employment. Which is why I asked whether the charts pasted here are showing employment or openings. And why I complained that the chart cuts off everything pre-Covid. If employment is going down, that’s a problem. If job openings are going down, it isn’t AI but a regression to the mean. This video is the same jobs trend looked at through a different lens. It’s pretty clear and logical that the demand for more seasoned professionals is more static than for juniors.

    These are numbers taken from public data and put into context, and I don’t think the fact that it’s posted on TikTok is relevant to the math. TikTok just has a better algorithm for discovery for me, and that’s where I saw this guy’s work and started following him. The length of short-form video also helps keep the content within attention span.

    That all being said, if employment of juniors is trending down and not just reverting to the mean, then I agree with the conclusion that this is a doomsday scenario cooking over the next 40 years. I have been saying for a couple of years that’s a concern to watch out for. But so far I haven’t seen numbers that concern me. I’ll be continuing to watch this space closely because it’s directly related to my interests.






  • I don’t like being recreationally high. I have tried several times. I’ll take cannabis to help me sleep when I have insomnia, so it’s not just being anti-drug; I just don’t like it. My son-in-law thinks weed is the greatest thing in the world. Sometimes I do it with him just to be social.

    The last time I partook, I sat in a chair for the entirety of a documentary about that sub implosion trying to decide if I needed to go throw up and whether I could make it without bouncing off walls and possibly the floor. By the end, I barely managed to take my ass to the guest room and fall asleep.

    On the other hand, alcohol helps me cope with social anxiety.