Facebook could easily block bots from clicking on advertisements but chooses not to, says a new report.

"Advertisers routinely end up paying for robotic (fraudulent) activity," according to the report by Method Media Intelligence (MMI), which has offices in New York, California, and London.

Nor is bot fraud small change.

Bot battles: Facebook is packing enough heat to stop bot attacks – but doesn't appear to be using it.

Most estimates place the annual cost of ad fraud in the tens of billions of dollars, say the report's authors. To put this into context, the FBI puts the annual cost of insurance fraud in the US at $40 billion.

Bots are increasingly good at simulating human activity: clicking on ads, opening web pages and downloading apps. When they do, advertisers who pay for user engagement end up paying for this fake activity instead.

Robot uprising

Typical advertisers spend around 20% of their budget paying for views from bots, says Shailin Dhar, MMI's director of research.

And while the California-based social media company has technology to detect bot activity, it deploys it only at account registration – not, for example, when users log in, view content or engage with advertisements.

Bots built with automation tools like Playwright, Puppeteer and Selenium can be deployed at enormous scale on cloud computing platforms like Amazon Web Services and Microsoft Azure. Puppeteer, for example, has been downloaded over 100 million times.

The problem goes beyond Facebook. "We created search ad campaigns on Google AdWords, Bing, and Yahoo and sent our own bot to click on our ads," say the MMI researchers. In each case, they were able to consume hundreds of dollars of budget in minutes with a bot, they add. Even the least sophisticated bots weren't filtered out of the billing for Google, Bing and Yahoo search ads.

MMI does not speculate as to why Internet companies don't deploy technology to filter out this bot activity.
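The economics above are easy to sketch. The 20% bot share comes from MMI; the budget, cost-per-click and click-rate figures below are hypothetical, chosen only to show how quickly a single bot can drain a campaign in line with the researchers' "hundreds of dollars in minutes" finding:

```python
# Back-of-the-envelope sketch of the waste described above.
# The 20% bot share is MMI's figure; all other numbers are hypothetical.

def wasted_spend(budget: float, bot_share: float = 0.20) -> float:
    """Portion of an ad budget consumed by bot traffic."""
    return budget * bot_share

def minutes_to_drain(budget: float, cost_per_click: float,
                     clicks_per_minute: int) -> float:
    """How long a single bot clicking search ads takes to exhaust a budget."""
    return budget / (cost_per_click * clicks_per_minute)

if __name__ == "__main__":
    # A hypothetical $50,000 monthly budget at MMI's typical 20% bot share:
    print(f"Wasted per month: ${wasted_spend(50_000):,.0f}")  # $10,000

    # A bot clicking a $2 search ad ten times a minute drains
    # a $500 daily budget in 25 minutes:
    print(f"Minutes to drain $500: {minutes_to_drain(500, 2.00, 10):.0f}")
```

At this rate, a handful of cloud-hosted bots would empty a typical daily search budget before a human reviewer ever looked at the traffic.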
A cynic, though, might argue they are just as happy to look the other way and send advertisers the big bills.

Platforms often say online advertising fraud is declining. In reality, as bots become more sophisticated and easier to deploy at massive scale, the problem is heading the other way, say the researchers.

Vote Bot in 2020

The implications go beyond advertisers' budgets, too.

With a US presidential election coming up in November, "with enough money and the right skills, these tools are exactly what would be used to mass manipulate social media," says one user on Hacker News, a community run by California-based accelerator Y Combinator.

"You could literally create thousands or millions of accounts and add enough entropy [disorder] to make it undetectable," says the user, who adds that it is "sort of scary to think about what is possible with this."

Nor is it a hypothetical danger.

Recently, "foreign activity groups have stepped up their efforts targeting the 2020 election," says Microsoft in a blog post. Strontium, a Russia-based cyber group so named by Western analysts, has recently attacked more than 200 US election-related organizations, including political campaigns, advocacy groups, parties and political consultants, Microsoft adds. In one two-week period in August, this included 6,900 accounts belonging to 28 organizations.

And bots don't only speak with a Russian accent. Zirconium, a network associated with China, and Phosphorus, linked to Iran, are also hard at work hacking the US election.