Theresa May's plans to tighten regulation of the internet to combat extremism have been branded “intellectually lazy” amid claims they fail to fully address the problem.
May accused big internet companies of giving terrorist ideology “the safe space it needs to breed” online, in the latest of a series of attacks on tech firms by senior Conservatives.
Her calls follow comments made to the G7, the group of the world’s major industrialised nations, last month that more emphasis needs to be placed on removing “extremist material” from the internet.
Digital campaigners the Open Rights Group said it was disappointing the Prime Minister had focused on regulation of the internet and encryption in the aftermath of the London Bridge attack.
The group said: “This could be a very risky approach. If successful, Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe.
“But we should not be distracted: the internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused.
“While governments and companies should take sensible measures to stop abuse, attempts to control the internet are not the simple solution that Theresa May is claiming.”
Professor Peter Neumann, director of the International Centre for the Study of Radicalisation at King’s College London, was also critical of May’s speech.
He wrote on Twitter: “Big social media platforms have cracked down on jihadist accounts, with result that most jihadists are now using end-to-end encrypted messenger platforms e.g. Telegram.
“This has not solved the problem, just made it different.
“Moreover, few people are radicalised exclusively online. Blaming social media platforms is politically convenient but intellectually lazy.
“In other words, May’s statement may have sounded strong but contained very little that is actionable, different, or new.”
The Tory manifesto for the General Election called for a much tougher approach to regulation of the internet.
It outlined measures to push internet companies further on their commitment to identify and remove terrorist propaganda, and stop terrorists communicating online.
Simon Milner, director of policy at Facebook, said the platform wanted to be “a hostile environment for terrorists” and would continue to work with international partners to tackle the problem.
He told the BBC: “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it – and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.”
A Google spokesman told ITV: “We are committed to working in partnership with the Government and NGOs to tackle these challenging and complex problems, and share the Government’s commitment to ensuring terrorists do not have a voice online.
“We are already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area.
“We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges.”
Last year Facebook, along with other internet giants such as Google, started using automation to remove extremist content from their sites.