I think Anubis is really focused on scraper bots feeding AI models, rather than posting bots. It's based on requests to non-standard endpoints in your own app, which you specify for Anubis in a couple of places (e.g. by leaving them out of /robots.txt or /.well-known).
If you're running, say, a Python bot that drives headless Chromium and executes JS to post stuff (something like the sketch below), you're probably going to hard-code known-good endpoints for comments and posts, rather than hitting random ones the way a scraper would.
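For illustration only, here's a minimal sketch of the kind of posting bot I mean, using Playwright to drive headless Chromium. The URL, selectors, and comment text are all made up; the point is just that it runs the site's JS like a real browser and only ever touches the endpoints it was written for.

```python
# Hypothetical posting bot: drives real headless Chromium via Playwright,
# executes the site's JS, and only hits the endpoints it was coded for.
# The URL and selectors below are invented for illustration.
from playwright.sync_api import sync_playwright

KNOWN_POST_URL = "https://example.com/forum/thread/123"  # hard-coded, known-good endpoint

def post_comment(text: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(KNOWN_POST_URL)         # any JS challenge runs here, same as in a human's browser
        page.fill("#comment-box", text)   # hypothetical selector for the comment field
        page.click("#submit-comment")     # hypothetical selector for the submit button
        browser.close()

if __name__ == "__main__":
    post_comment("Totally a human opinion.")
```

Because it executes JS in a real Chromium, a bot like this should clear a proof-of-work challenge the same way a person's browser does, just a bit more slowly.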
Anubis is good for stopping the n-request-per-second spamming of scrapers, but not so much for just blocking non-human bots that post at normal rates.
My last employer was a Fortune 50, and we did automation detection through behavioral mapping: posting locations, posting times, even word patterns. The word-pattern work was a very cool experimental project I got to work on, which used a database of normalized English word frequencies to flag bots whose language was too similar across users, or even too "perfect"; that signal was only ever used as an indicator, never treated as definitive. Detecting human-impersonating bots from raw network traffic alone is extremely difficult.
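To give a flavor of the word-frequency idea (a toy sketch only, not the actual system, and the baseline numbers are invented): compare each user's word distribution against a normalized English baseline, and treat a distribution that's "too perfect" as one weak signal among many.

```python
# Toy sketch of the word-frequency signal, not the real system.
# Idea: a user's word distribution that sits *too* close to a normalized
# English baseline (or too close to other users') is a weak bot indicator.
from collections import Counter
import math

# Invented baseline: relative frequencies of a few common English words.
BASELINE = {"the": 0.06, "be": 0.04, "to": 0.03, "of": 0.03, "and": 0.03, "a": 0.02}

def word_distribution(text: str) -> dict[str, float]:
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(p: dict[str, float], q: dict[str, float]) -> float:
    words = set(p) | set(q)
    dot = sum(p.get(w, 0.0) * q.get(w, 0.0) for w in words)
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def suspicion_score(user_texts: list[str]) -> float:
    """Average similarity of a user's posts to the English baseline.
    Cross-user comparison would reuse cosine_similarity between user distributions.
    A high score is an indicator, never proof."""
    dists = [word_distribution(t) for t in user_texts]
    return sum(cosine_similarity(d, BASELINE) for d in dists) / len(dists)

if __name__ == "__main__":
    print(suspicion_score(["the cat and the dog went to the park"]))
```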
So this bit here:
seems like the lowest-effort way to circumvent this law. Just hide connections for users in VA, and now you're not a social media site. You could even give them their own domain, like facebookva.com, that only they route through (something like the sketch below), in case the VA government tries to claim that all users on the site have to work the same way. Tie them up for years.
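Purely to illustrate the routing idea, here's a toy Flask middleware that shunts Virginia traffic onto the separate domain. The `X-Region` header is an assumption standing in for whatever geo signal your CDN or a GeoIP lookup actually gives you, and facebookva.com is the hypothetical VA-only property from above.

```python
# Toy sketch of the "give VA users their own domain" idea.
# Assumes an upstream CDN / load balancer sets an "X-Region" header
# (header name is made up; real deployments would use GeoIP or their CDN's header).
from flask import Flask, redirect, request

app = Flask(__name__)
VA_DOMAIN = "https://facebookva.com"  # hypothetical VA-only domain
VA_HOST = "facebookva.com"

@app.before_request
def route_virginia_users():
    # Send Virginia traffic to the separate, differently-regulated property.
    if request.headers.get("X-Region") == "US-VA" and request.host != VA_HOST:
        return redirect(VA_DOMAIN + request.path, code=302)

@app.route("/")
def home():
    return "main site"
```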