It won't be just human eyes monitoring the thousands of security cameras at the Paris Olympics. Martin Bureau/AFP via Getty Images
The 2024 Paris Olympics is drawing the eyes of the world as thousands of athletes and support personnel and hundreds of thousands of visitors from around the globe converge in France. It's not just the eyes of the world that will be watching. Artificial intelligence systems will be watching, too.
Governments and private companies will be using advanced AI tools and other surveillance tech to conduct pervasive and persistent surveillance before, during and after the Games. The Olympic world stage and international crowds pose elevated security risks so significant that in recent years authorities and critics have described the Olympics as the "world's largest security operations outside of war."
The French government, hand in hand with the private tech sector, has harnessed that legitimate need for increased security as grounds to deploy technologically advanced surveillance and data-gathering tools. Its surveillance plans to meet these risks, including controversial use of experimental AI video surveillance, are so extensive that the country had to change its laws to make the planned surveillance legal.
The plan goes beyond new AI video surveillance systems. According to news reports, the prime minister's office has negotiated a classified provisional decree that permits the government to significantly ramp up traditional, surreptitious surveillance and information-gathering tools during the Games. These include wiretapping; collecting geolocation, communications and computer data; and capturing greater amounts of visual and audio data.

French President Emmanuel Macron reviews surveillance cameras in preparation for the Paris Olympics.
Christophe Petit Tesson/AFP via Getty Images
I'm a law professor and attorney, and I research, teach and write about privacy, artificial intelligence and surveillance. I also provide legal and policy guidance on these subjects to legislators and others. Heightened security risks can and do require increased surveillance. This year, France has faced concerns about its Olympic security capabilities and credible threats around public sporting events.
Preventive measures should be proportional to the risks, however. Globally, critics claim that France is using the Olympics as a surveillance power grab and that the government will use this "exceptional" surveillance justification to normalize society-wide state surveillance.
At the same time, there are legitimate concerns about adequate and effective surveillance for security. In the U.S., for example, the country is asking how the Secret Service's security surveillance failed to prevent an assassination attempt on former President Donald Trump on July 13, 2024.
AI-powered mass surveillance
Enabled by newly expanded surveillance laws, French authorities have been working with the AI companies Videtics, Orange Business, ChapsVision and Wintics to deploy sweeping AI video surveillance. They have used the AI surveillance during major concerts and sporting events, and in metro and train stations during periods of heavy use, including around a Taylor Swift concert and the Cannes Film Festival. French officials said these AI surveillance experiments went well and the "lights are green" for future uses.
The AI software in use is generally designed to flag certain events, like changes in crowd size and movement, abandoned objects, the presence or use of weapons, a body on the ground, smoke or flames, and certain traffic violations. The goal is for the surveillance systems to immediately detect, in real time, events like a crowd surging toward a gate or a person leaving a backpack on a crowded street corner, and to alert security personnel. Flagging these events seems like a logical and sensible use of technology.
But the real privacy and legal questions flow from how these systems function and how they are being used. How much and what types of data have to be collected and analyzed to flag these events? What are the systems' training data, error rates and evidence of bias or inaccuracy? What is done with the data after it is collected, and who has access to it? There is little in the way of transparency to answer these questions. Despite safeguards aimed at preventing the use of biometric data that can identify people, it's possible the training data captures this information and the systems could be adjusted to use it.
By giving these private companies access to thousands of video cameras already located throughout France, harnessing and coordinating the surveillance capabilities of rail companies and transport operators, and allowing the use of drones with cameras, France is legally permitting and supporting these companies to test and train AI software on its citizens and visitors.
Legalized mass surveillance
Both the need for and the practice of government surveillance at the Olympics are nothing new. Security and privacy concerns at the 2022 Winter Olympics in Beijing were so high that the FBI urged "all athletes" to leave personal cellphones at home and use only a burner phone while in China because of the extreme level of government surveillance.
France, however, is a member state of the European Union. The EU's General Data Protection Regulation is one of the strongest data privacy laws in the world, and the EU's AI Act is leading efforts to regulate harmful uses of AI technologies. As a member of the EU, France must follow EU law.
France has cleared the way legally to expand its use of AI in surveillance of public places.
Preparing for the Olympics, France in 2023 enacted Law No. 2023-380, a package of laws providing a legal framework for the 2024 Olympics. It includes the controversial Article 7, a provision that allows French law enforcement and its tech contractors to experiment with intelligent video surveillance before, during and after the 2024 Olympics, and Article 10, which specifically permits the use of AI software to review video and camera feeds. These laws make France the first EU country to legalize such a wide-reaching AI-powered surveillance system.
Scholars, civil society groups and civil liberty advocates have pointed out that these articles run contrary to the General Data Protection Regulation and the EU's efforts to regulate AI. They argue that Article 7 specifically violates the General Data Protection Regulation's provisions protecting biometric data.
French officials and tech company representatives have said that the AI software can accomplish its goals of identifying and flagging those particular types of events without identifying people or running afoul of the General Data Protection Regulation's restrictions on processing biometric data. But European civil rights organizations have pointed out that if the purpose and function of the algorithms and AI-driven cameras are to detect specific suspicious events in public spaces, these systems will necessarily "capture and analyse physiological features and behaviours" of people in those spaces. These include body positions, gait, movements, gestures and appearance. The critics argue that this is biometric data being captured and processed, and thus France's law violates the General Data Protection Regulation.
AI-powered security – at a cost
For the French government and the AI companies, the AI surveillance has so far been a mutually beneficial success. The algorithmic watchers are being used more, and they give governments and their tech collaborators far more data than humans alone could provide.
But these AI-enabled surveillance systems are poorly regulated and subject to little in the way of independent testing. Once the data is collected, the potential for further data analysis and privacy invasions is enormous.

Anne Toomey McKenna is Co-Chair of the Institute of Electrical and Electronics Engineers (IEEE)-USA's Artificial Intelligence Policy Committee (AIPC), which involves subject-matter and education-related interaction with U.S. Senate and House congressional staffers and the Congressional AI Caucus. McKenna has received funding from the National Security Agency for the development of legal educational materials about cyberlaw, and funding from the National Police Foundation, along with the U.S. Department of Justice COPS Office, for legal analysis regarding the use of drones in domestic policing.