Home Secretary Amber Rudd told the BBC that she would not rule out forcing technology companies to use it by law.
Rudd visited the US to meet tech companies and discuss the idea, as well as other efforts to tackle extremism.
The tool was developed to demonstrate that the government's demand for a clampdown on extremist activity was not unreasonable, Rudd said.
"It's a very convincing example of the fact that you can have the information you need to make sure this material doesn't go online in the first place," she told the BBC.
Thousands of hours of content posted by the Islamic State (IS) terror group were run through the tool in order to "train" it to automatically spot extremist material.
The government provided 600,000 pounds ($832,000) of public funds towards the creation of the tool by ASI Data Science, an artificial intelligence company based in London.
According to ASI Data Science, the software is capable of detecting 94 per cent of IS's online activity, with an accuracy of 99.995 per cent.
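Those two figures can be hard to interpret on their own. The sketch below is a rough, illustrative calculation only, not a description of ASI Data Science's methodology: it assumes, hypothetically, that the 99.995 per cent accuracy figure corresponds to the rate at which non-extremist videos are correctly passed, and shows what the claims would imply for an example pool of one million ordinary uploads.

```python
# Back-of-the-envelope sketch of what the reported figures could imply,
# assuming (hypothetically) that "99.995% accuracy" means 0.005% of
# non-extremist videos are wrongly flagged.

DETECTION_RATE = 0.94               # share of IS material reportedly caught
FALSE_POSITIVE_RATE = 1 - 0.99995   # assumed rate of benign videos wrongly flagged


def expected_flags(benign_videos: int, extremist_videos: int) -> tuple[float, float]:
    """Return (expected correct flags, expected false flags) for a mix of uploads."""
    true_positives = extremist_videos * DETECTION_RATE
    false_positives = benign_videos * FALSE_POSITIVE_RATE
    return true_positives, false_positives


if __name__ == "__main__":
    # Example: one million ordinary videos and 1,000 pieces of IS propaganda.
    tp, fp = expected_flags(benign_videos=1_000_000, extremist_videos=1_000)
    print(f"Expected IS items caught: {tp:.0f}")                   # ~940
    print(f"Expected benign items flagged for review: {fp:.0f}")   # ~50
```

On those assumed figures, roughly 50 in every million ordinary videos would be flagged for additional human review.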
The Global Internet Forum to Counter Terrorism, launched last year, brings together several governments, including the US and UK, and major internet firms such as Facebook, Google and Twitter.
However, the bigger challenge is predicting which parts of the internet the terrorists will use next.
The Home Office estimates that between July and the end of 2017, extremist material appeared on almost 150 web services that had not been used for such propaganda before.