cross-posted to:
- privacy@lemmy.ml
I have no remarks, just really amused by your writing in your repo.
Going to build a Docker image and self-host this shit you made and enjoy your hard work.
Thank you for this!
deleted by creator
Same sentiment. Tonight it runs on my systems XD.
deleted by creator
This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?
deleted by creator
This is so cool to read about, thx for doing what you do and pls keep doing it! We need high-quality and trustworthy information now more than ever, I think. Damn nzs spewing their propaganda everywhere and radicalising the vulnerable. Thanks!
abliterated one
Please elaborate, that alone piqued my curiosity. Pardon me if I could’ve searched.
deleted by creator
Thank you again for your explanations. After being washed out by everything AI, I’m genuinely excited to set this up. I know what I’m doing today! I will surely be back.
deleted by creator
Removed by mod
I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues; it’s just that there are LLMs double-checking other LLMs’ work to try to find those issues. There are still no guarantees, since it’s still all LLMs.
I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources when claiming stuff and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something that it doesn’t.
deleted by creator
How does having a key solve anything? It’s not that the source doesn’t exist, it’s that the source says something different to the LLM’s interpretation of it.
deleted by creator
The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”
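Roughly, the bookkeeping could look like this (a minimal sketch; the field names are mine, not the actual schema):

```python
import hashlib

def source_hash(source_bytes: bytes) -> str:
    """Content-address the exact bytes an answer was grounded in."""
    return hashlib.sha256(source_bytes).hexdigest()

# Hypothetical provenance record stored next to each answer:
record = {
    "answer": "…model output…",
    "source_id": "doc-0042",  # made-up id for illustration
    "source_sha256": source_hash(b"the exact bytes the model saw"),
}

# To audit later: re-fetch doc-0042, re-hash it, compare. A match pins the
# claim to these exact bytes; a mismatch means the source itself changed.
```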
Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources then you haven’t fixed LLM bullshitting
If it does that more than twice, straight in the bin. I have zero chill any more.
That’s… not how any of this works…
deleted by creator
Voodoo is not magic btw, it was sullied by colonists
deleted by creator
I think this was done by France. Not better, though.
deleted by creator
As someone of Haitian descent, no; the French get hardly enough blame, as it is.
Always
wym?
Holy shit I’m glad to be on the autistic side of the internet.
Thank you for proving that fucking JSON text files are all you need and not “just a couple billion more parameters bro”
Awesome work, all the kudos.
deleted by creator
Very impressive! Do you have a benchmark to test the reliability? A paper would be an awesome contribution to the science.
deleted by creator
I understand; no idea how to do it either. I heard about SWE-Bench-Lite, which seems to focus on real-world usage. Maybe try to contact “AI Explained” on YT; he’s the best IMO. Your solution might be novel or not, but he might help you figure that out. If it is indeed novel, it might be worth sharing with the larger community. Of course, I totally get that you might not want to do any of that. Thank you for your work!
Super interesting build
And if programming doesn’t pan out, please start writing for a magazine; love your style (or was this written with your AI?)
deleted by creator
meat popsicle
( ͡° ͜ʖ ͡°)
Anyway, the other person is right. Your writing style is great!
I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.
Anyway, version 2: this is a very cool idea! I cannot wait to either:
- incorporate it into my workflows
- let it sit in a tab, never to be touched again
- theorycraft, do tests, and request features so much that I burn out
Last but not least, thank you for not using github as your primary repo
deleted by creator
Don’t spam my Github inbox plz
I can spam your Codeberg’s then? :)
About the random outbursts: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC… …sorry, sorry…
Understandable, have a great day.
deleted by creator
Based AF. Can anyone more knowledgeable explain how it works? I am not able to understand.
deleted by creator
As I understand it, it corrects the output of LLMs. If so, how does it actually work?
deleted by creator
That is much clearer. Thank you for making this. It actually makes LLMs useful with far fewer downsides.
deleted by creator
Will do.
This seems astonishingly more useful than the current paradigm, this is genuinely incredible!
I mean, fellow Autist here, so I guess I am also… biased towards… facts…
But anyway, … I am currently uh, running on Bazzite.
I have been using Alpaca so far, and have been successfully running Qwen3 8B through it… your system would address a lot of problems I have had to figure out my own workarounds for.
I am guessing this is not available as a flatpak, lol.
I would feel terrible asking you to do anything more after all of this work, but if anyone does actually set up a Podman-installable container for this that properly grabs all required dependencies, please let me know!
deleted by creator
Oh I entirely believe you.
Hell hath no wrath like an annoyed high functioning autist.
I’ve … had my own 6-month blackout periods where I came up with something extremely comprehensive and ‘neat’ before.
Seriously, bootstrapping all this is incredibly impressive.
I would… hope that you can find collaborators, to keep this thing alive in the event you get into a car accident (metaphorical or literal), or, you know, are completely burnt out after this.
… but yeah, it is… yet another immensely ironic aspect of being autistic that we’ve been treated and maligned as robots our whole lives, and then when the normies think they’ve actually built the AI from sci-fi, no, turns out it’s basically extremely talented at making up bullshit and fudging the details and being a hypocrite, which… appalls the normies when they have to look into a hyperpowered mirror of themselves.
And then, of course, to actually fix this, it’s some random autist no one has ever heard of (apologies if you are famous and I am unaware of this), who is putting in an enormous amount of effort that… most likely, will not be widely recognized.
… fucking normies man.
deleted by creator
No promises, but if I end up running this it will be by putting it in a container. If I do, then I’ll put a PR on Codeberg with a Docker Compose file (compatible with Podman on Bazzite).
deleted by creator
Huzzah!
I want to believe you, but that would mean you solved hallucination.
Either:
A) you’re lying
B) you’re wrong
C) the KB is very small
deleted by creator
So… RAG with extra steps and RAG summarization? What about facts that don’t come from RAG retrieval?
deleted by creator
The system summarizes and hashes docs. The model can only answer from those summaries in that mode.
Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?
deleted by creator
Huh? That is the literal opposite of what I said. Like, diametrically opposite.
The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step.
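In sketch form, the mode I’m describing is closer to this (function names are mine, not from the repo):

```python
import hashlib

def build_kb_entry(doc_text: str, summarize) -> dict:
    """Summarize a doc and pin the summary to its exact source bytes."""
    return {
        "summary": summarize(doc_text),  # LLM-written summary
        "source_sha256": hashlib.sha256(doc_text.encode()).hexdigest(),
    }

def answer(question: str, kb: list[dict], llm) -> str:
    # The model sees ONLY the stored summaries: no web search, no
    # embeddings, no semantic retrieval step at all.
    context = "\n\n".join(entry["summary"] for entry in kb)
    return llm(f"Answer strictly from this context:\n{context}\n\nQ: {question}")
```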
No, that’s exactly what you wrote.
Now, with this change
SUMM -> human reviews
that would be fixed, but it will only work for small KBs, as otherwise reviewing the summaries would be exhausting.
Case in point: assume a Person model with 3-7 facts per Person. Assume a small set of 3000 Persons. How would the SUMM work? Do you expect a human to verify that SUMM? How are you going to converse with your system to get the data from that KB Person set? Because to me that sounds like case C: it only works for small KBs.
Again: the proposition is not “the model will never hallucinate.” It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version.”
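A minimal sketch of the gate I mean (illustrative names, not the actual code):

```python
import hashlib

def ingest(summary: str, source_bytes: bytes, reviewer_approved: bool) -> dict:
    """Nothing enters the KB without an explicit human sign-off, and the
    sign-off is pinned to one exact source version via its hash."""
    if not reviewer_approved:
        raise ValueError("summary rejected at review time")
    return {
        "summary": summary,
        "source_sha256": hashlib.sha256(source_bytes).hexdigest(),
    }

# If a bad summary does slip through review, the hash still tells you
# exactly which source version the mistake was approved against.
```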
Fair. Except that you are still left with the original problem: you don’t know WHEN the information is incorrect if you missed it at SUMM time.
deleted by creator
Woof, after reading your “contributions” here, are you this fucking insufferable IRL or do you keep it behind a keyboard?
Goddamn. I’m assuming you work in tech in some capacity? Shout-out to anyone unlucky enough to white-knuckle through a workday with you, avoiding an HR incident would be a legitimate challenge, holy fuck.
Hallucination isn’t nearly as big a problem as it used to be. Newer models aren’t perfect but they’re better.
The problem addressed by this isn’t hallucination, it’s the training to avoid failure states. Instead of guessing (different from hallucination), the system forces a Negative response. That’s easy, and any company big or small could do it; the big ones just like the bullshit.
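Something in this spirit would already go a long way (a hedged sketch, not anyone’s actual implementation):

```python
def answer_or_refuse(question: str, context: str, llm) -> str:
    # Make "not in the sources" a first-class answer instead of a failure
    # state the model will bluff its way around.
    prompt = (
        "Answer ONLY from the context below. If the context does not "
        "contain the answer, reply exactly: NOT IN CONTEXT.\n\n"
        f"{context}\n\nQ: {question}"
    )
    reply = llm(prompt).strip()
    return "I don't know." if reply == "NOT IN CONTEXT" else reply
```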
deleted by creator
A benchmark very much tailored to LLMs’ strengths calls you a liar.
https://artificialanalysis.ai/articles/gemini-3-flash-everything-you-need-to-know (A month ago the hallucination rate was ~50-70%)
Buuuuullshit. Asked different models about the ten highest summer transfer scorers and got wildly different answers. They then tried to explain why and got more wrong numbers.
THIS IS AWESOME!!! I’ve been working on using an Obsidian vault and a Podman ollama container to do something similar, with VSCodium + Continue as middleware. But this! This looks to me like it is far superior to what I have cobbled together.
I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.
On an unrelated note, you can download wikipedia. Might work well in conjunction with your conductor.
deleted by creator
Understood.
Hallucination is mathematically proven to be unsolvable with LLMs. I don’t deny this may have drastically reduced it, or not, I have no idea.
But hallucinations will just always be there as long as we use LLMs.
deleted by creator
I strongly feel that the best way to improve the usability of LLMs is through better human-written tooling/software. Unfortunately, most of the people promoting LLMs are tools themselves, and all their software is vibe-coded.
Thank you for this. I will test it on my local install this weekend.
deleted by creator
Fuck yeah… good job. This is how I would like to see “AI” implemented. Is there some way to attach other data sources? Something like a locally hosted wiki?
deleted by creator
I wanna just plug Wikipedia into this and see if it turns an LLM into something useful for the general case.
deleted by creator
Not OP, but random human.
Glad you tried the “YOLO Wikipedia” approach and are sharing that fact, as it saves the rest of us time. :)
deleted by creator
Yes please
deleted by creator
Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn’t, and actually being serious about addressing its problems and limitations. It’s projects like yours that can demonstrate pathways toward achieving better AI.