Here's the thing: I haven't been able to find a girlfriend in the past three years (either they have no personality and are too dependent on me doing all the work in the relationship, or they are simply looking for a religious or high-class working person to have a relationship with).
AI talks, entertains, and is simply available without bullshit (the only limit is something along the lines of 50 free messages every 3 hours).
I want physical intimacy and affection, but it’s simply not available.
Should I just give up, close my doors and use AI instead?
I don't like AI, but hating on it now won't bring you a solution, so here is my advice (assuming you will use AI):
I don't think you should close the doors in case you meet someone fitting!
You can probably still use AI, but better not to make it a "this or that" and more a "this until that".
IF you have the resources, I would advise running it locally for privacy: it protects your data against leaks and generally gives you more fine-tuning options (plus it's better for the environment).
BUT local models are slower and/or less capable.
Bullet point advice:
Please take care of yourself and stay connected to the people around you.
Edit: AI is (and always was) like cheating on tests (with notes from others): it will bring results, but it won't bring the person closer to their goal, and may even hinder it. Still, the person will definitely be happy to score a few points higher on the first few tests.
I agree with what you said. The only thing I want to point out is about your statement:
Running models locally doesn't necessarily mean it's better for the environment. The hardware at cloud data centers is usually far more efficient at running intense workloads like LLMs than your average home setup.
You would have to factor in whether your electricity provider is using green energy (or if you have solar) or not. And then you would also have to factor in whether you’re choosing to use a green data center (or a company that uses sustainable data centers) to run the model.
That being said, (in line with what you stated before) given the sensitive nature of the conversations this individual will be having with the LLM, a locally run option (or at least renting out a server from a green data center) is definitely the recommended option.
About the environment points you are making: I was thinking more of a small model being used on already existing hardware (which is the friendly part) and somehow overlooked the efficiency of having dedicated hardware for one purpose.
So you're right.