You may try different prompting to make sure it only takes information out of your input. Also maybe look into ways to engineer your prompt to give the LLM less room for creativity. Maybe make an assistant for the task. Claude is definitely able to only use info from your own input. We use it that way at work to make compliance stuff searchable.
There are still different models that are best for different tasks though. One of the Gemini models has a very low hallucination rate.
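If it helps, here's roughly what the "only use my input" setup looks like with the Anthropic Python SDK. This is just a sketch, not our actual config: the system prompt wording, the model name, and the compliance_policy.txt file are made up for illustration.

```python
# Minimal sketch using the Anthropic Python SDK (pip install anthropic).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical source document you want answers restricted to.
document = open("compliance_policy.txt").read()

# System prompt that tries to remove room for creativity.
system_prompt = (
    "Answer ONLY using the document provided by the user. "
    "If the answer is not in the document, say you cannot find it. "
    "Do not use outside knowledge or speculate."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: use whatever model you have access to
    max_tokens=500,
    system=system_prompt,
    messages=[
        {
            "role": "user",
            "content": f"<document>\n{document}\n</document>\n\n"
                       "Question: What is the data retention period?",
        }
    ],
)

print(response.content[0].text)
```

Wrapping the document in tags and putting the restriction in the system prompt is the main trick; you can also tell it to quote the passage it used so you can spot-check it isn't making things up.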