- Do YOU know if you’ve been filmed? Email richard.percival@mailonline.co.uk
A privacy row broke out today after it emerged that Network Rail has been secretly using Amazon’s AI technology to monitor thousands of rail passengers at major stations across the UK.
Campaigners last night accused the Government-owned company of displaying a ‘contempt for our rights’ by secretly installing AI-powered surveillance at rail hubs across Britain.
It is feared that thousands of people have had their faces recorded by ‘smart’ CCTV cameras designed to establish their age, gender and emotions at London Waterloo and Euston stations, as well as Manchester Piccadilly, Leeds, Glasgow, Reading, Dawlish, Dawlish Warren and Marsden stations.
The scheme has been running for two years, with the data sent to Amazon Rekognition, according to documents obtained through a Freedom of Information request by civil rights group Big Brother Watch.
According to the documents, there are between five and seven AI cameras at each station.
Big Brother Watch last night warned that ‘AI-powered surveillance could put all our privacy at risk’ – adding that Network Rail had shown a ‘contempt for our rights’.

Network Rail started the trials in 2022 with the aim of improving customer service and enhancing passenger safety (file image of London Liverpool Street station)

Cameras placed at ticket barriers at mainline railway stations across the country analyse customers’ faces (file image)
The Information Commissioner’s Office (ICO) previously warned companies against using the technology.
It also said the technologies are ‘immature’ and ‘may not work yet, or indeed ever’.
Jake Hurfurt, Head of Research & Investigations at Big Brother Watch, said: ‘Network Rail had no right to deploy discredited emotion recognition technology against unwitting commuters at some of Britain’s biggest stations, and I have submitted a complaint to the Information Commissioner about this trial.
‘It is alarming that as a public body it decided to roll out a large scale trial of Amazon-made AI surveillance in several stations with no public awareness, especially when Network Rail mixed safety tech in with pseudoscientific tools and suggested the data could be given to advertisers.
‘Technology can have a role to play in making the railways safer, but there needs to be a robust public debate about the necessity and proportionality of tools used.
‘AI-powered surveillance could put all our privacy at risk, especially if misused, and Network Rail’s disregard of those concerns shows a contempt for our rights.’
Carissa Véliz, an associate professor in philosophy at the Institute for Ethics in AI at the University of Oxford, told Wired: ‘Systems that do not identify people are better than those that do, but I do worry about a slippery slope.
‘There is a very instinctive drive to expand surveillance. Human beings like seeing more, seeing further. But surveillance leads to control, and control to a loss of freedom that threatens liberal democracies.’
The ICO’s deputy commissioner Stephen Bonner said: ‘Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever.
‘While there are opportunities present, the risks are currently greater.
‘At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgments about a person that are inaccurate and lead to discrimination.
‘The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science.
‘As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.

Civil liberties group Big Brother Watch have raised privacy concerns about the Network Rail scheme and have submitted a complaint to the Information Commissioner’s Office (ICO) (file image of Carlisle railway station)

London Euston is one of the stations where the cameras have been placed
‘The ICO will continue to scrutinise the market, identifying stakeholders who are seeking to create or deploy these technologies, and explaining the importance of enhanced data privacy and compliance, whilst encouraging trust and confidence in how these systems work.’
AI researchers have also warned that using the technology to detect emotions is ‘unreliable’ and should be banned.
In the EU such systems are banned or deemed ‘high risk’ under the Artificial Intelligence Act.
Gregory Butler, chief executive of Purple Transform, which conducted the trial for Network Rail, said that although the trial was continuing, the part looking at emotions and demographics had been short-lived.
Network Rail has refused to answer questions about the scheme, but in a statement a spokesperson said: ‘We take the security of the rail network extremely seriously and use a range of advanced technologies across our stations to protect passengers, our colleagues, and the railway infrastructure from crime and other threats.
‘When we deploy technology, we work with the police and security services to ensure that we’re taking proportionate action, and we always comply with the relevant legislation regarding the use of surveillance technologies.’