The meme is talking about a common probability error (the base rate fallacy) that surveys have shown even doctors are prone to making.
Why you’re probably ok:
The disease is far rarer than a false positive is. If the disease occurs in 1 out of a million people and the test has a 3% false-positive rate, then out of a million people tested at random, roughly 30,000 will test positive falsely while only 1 truly has the disease. So if you are tested at random and show positive, you only have about a 1 in 30,000 chance of being the 1 person who truly has the disease.
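The arithmetic above can be sketched with Bayes' theorem. The numbers are the meme's (1-in-a-million prevalence, 3% false-positive rate), plus one simplifying assumption I'm adding: the test never misses a true case (100% sensitivity).

```python
# Bayes' theorem with the meme's numbers (assumption: 100% sensitivity).
prevalence = 1 / 1_000_000
false_positive_rate = 0.03   # i.e. 97% specificity
sensitivity = 1.0            # simplifying assumption, not from the meme

# P(positive) = true positives + false positives
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# P(disease | positive)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(p_disease_given_positive)  # roughly 1/30,000
```

The posterior comes out near 1/30,000 because the ~30,000 false positives per million tests swamp the single true positive.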


If you are trying to minimize false positives, you want the specificity to be high, not necessarily the sensitivity, which governs false negatives.
And 97% specificity with a very low pretest probability still results in a low probability of disease, which is why screening for so many diseases is difficult, even if diagnosing them can be easy when there are clinical signs and symptoms in addition to the test. The clinical background can increase the pretest probability significantly, allowing the test to do its job.
A video about pretest probability from Dr. Rohin Francis whose YouTube videos are very informative in general.
Another very relevant video from 3Blue1Brown about the problem.
Yes, understood, ideally you would have two tests: one with high sensitivity to give some confidence that the disease is there, followed by a high-specificity test to compound the probability and rule out the false positive. Most tests have a trade-off between specificity and sensitivity, so two tests are needed.
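Chaining two tests is easiest to see in the odds form of Bayes' theorem, where each positive result multiplies the prior odds by that test's likelihood ratio. The two tests below are hypothetical (a sensitive-but-less-specific screen, then a specific-but-less-sensitive confirmation), and the sketch assumes the tests are independent given disease status, which real tests often aren't:

```python
# Odds-form Bayes update across two hypothetical, assumed-independent tests.
def update_odds(prior_odds, likelihood_ratio):
    """One Bayes update: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prevalence = 1 / 1_000_000
odds = prevalence / (1 - prevalence)  # prior odds

# Positive likelihood ratio = sensitivity / (1 - specificity)
lr_screen = 0.99 / (1 - 0.90)    # sensitive screen: LR+ = 9.9
lr_confirm = 0.85 / (1 - 0.999)  # specific confirmation: LR+ = 850

odds = update_odds(odds, lr_screen)   # after first positive
odds = update_odds(odds, lr_confirm)  # after second positive
probability = odds / (1 + odds)
print(probability)
```

Notably, even two positive results here leave the posterior under 1%, because the 1-in-a-million prior is so extreme; this is the same reason population screening is hard, and why raising the pretest probability with clinical context matters so much.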
Edit:
Watched the two videos. I love both these YouTubers but hadn't seen either video before. Calculating the Bayes factor as an update to the prior odds was very interesting and helped deepen my understanding, thank you.