Chat Control: Mass Surveillance, AI Policing, and the End of Privacy Rights

What is Chat Control?

Chat Control (also known as the ePrivacy derogation) is the latest in a long list of government efforts to expand surveillance and gain access to citizens’ private data.

However, as far as “democratic” governments are concerned, it is unprecedented in scale and in its threat to fundamental human rights. It is a giant and dangerous step toward an integrated global network of mass surveillance, tracking, categorization, and complete control over people’s lives.

The stated purpose of the Chat Control regulation is “combating child sexual abuse online”. It will require all email, messaging, and chat service providers to automatically search all messages and correspondence of every EU citizen for “possible” child pornography or child grooming content – including private messages secured with end-to-end encryption. The regulation will allow providers to use error-prone artificial intelligence (AI) technology to search user communications for possibly illegal content. If the search algorithms flag a message as “child sexual abuse material” (CSAM), the user will be reported and their communications disclosed to private organizations and law enforcement – yet the users concerned are never notified when this happens. Chat Control will have a devastating impact on the human rights of millions of citizens, particularly anyone falsely flagged and reported by this automated process. [2]

Despite a survey showing that 72% of Europeans oppose indiscriminate screening of their private messages, on 26 May 2021 the Chat Control regulation was approved by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE). [3]

Patrick Breyer (member of the Pirate Party in Germany and the Greens/European Free Alliance Group in the European Parliament) criticises the decision:

“This regulation means all of our private e-mails and messages will be subjected to privatised real-time mass surveillance using error-prone incrimination machines inflicting devastating collateral damage on users, children and victims alike. Countless innocent citizens will come under false suspicion of having committed a crime, under-agers will see self-generated nudes (sexting) fall in wrong hands, abuse victims will lose safe channels for counselling. This regulation sets a terrible precedent and is a result of a campaign of disinformation and moral blackmailing from foreign actors. Unleashing such denunciation machines is ineffective, illegal and irresponsible. It’s using totalitarian methods in a democracy.” [4]

Why Chat Control is a worldwide problem

Even though Chat Control is an EU regulation, it is a threat to all democratic societies (and even less-than-democratic ones). A mass surveillance system, with AI police monitoring every private conversation of every citizen, goes against the fundamental freedoms and right to privacy that democratic societies often take for granted. Such a drastic change will have a life-changing impact on hundreds of millions of people, and every citizen worldwide will no doubt feel the effects.

In fact, this is already affecting the U.S. in several ways. Many of the largest service providers impacted by the Chat Control regulation are U.S.-based (Google, Microsoft, Facebook, etc.), and whatever changes they make to “comply” with Chat Control will likely affect users in the U.S. and elsewhere. Importantly, U.S. lawmakers have been trying for years to pass legislation implementing some of the privacy-invading techniques that Chat Control will impose – the EARN IT Act and the LAED Act are recent examples. Neither EARN IT nor LAED has passed, but a third example, SESTA/FOSTA, did pass and became law.

SESTA/FOSTA has some commonalities with Chat Control. Riana Pfefferkorn, Research Scholar at the Stanford Internet Observatory, explains the problems with SESTA/FOSTA: “SESTA/FOSTA pierces providers’ immunity from civil and state-law claims about sex trafficking. Just as pretty much everybody predicted, SESTA/FOSTA has turned out to endanger sex workers instead of protecting them, and is currently being challenged as unconstitutional in federal court.” [10] Chat Control will almost certainly be challenged in court, and one of the most important arguments against it is that it will end up having the opposite effect of its intended purpose. Millions of children and teenagers will be at risk of having their explicit images fall into the wrong hands and be added to the already huge number of images in circulation, creating new victims in the process. Like SESTA/FOSTA, Chat Control is doomed to fail and to put the very people it is supposed to protect at greater risk.

In recent years, several democratic countries have passed more restrictive laws that are progressively weakening free speech and privacy rights of internet users, setting the stage for even worse threats to human rights and the nightmare of Chat Control. The most notable examples are the UK’s Investigatory Powers Act in 2016, Germany’s Network Enforcement (NetzDG) Act in 2017, and Australia’s Assistance and Access Act in 2018.

Critics of Germany’s NetzDG law warned that it would be abused to take down legitimate content, leading to increased censorship and undermining freedom of expression – which came to pass very quickly. Other critics warned it might be used to legitimize online censorship models for authoritarian states, which has since been proven true several times over.

Another recent and important example comes from the EU itself. The EU Copyright Directive, specifically Article 17 (formerly Article 13) – which also took inspiration from Germany’s NetzDG law – introduced a liability regime for online platforms. This is a fundamental reworking of how copyright works on the internet. Today, rightsholders are responsible for finding infringements of their rights and sending a “takedown notice” to online platforms to have the content removed. Article 17 completely reverses this: it relieves rightsholders of the need to scan the internet for infringement or send out notices, and instead requires service providers to ensure that none of their users are infringing copyright, period.

Requiring providers to check all content uploaded by their users for “copyright infringement” cannot realistically be accomplished without some sort of automatic filter. Unfortunately, the AI filters platforms use today have high error rates, which results in the removal of non-infringing content and more censorship.

Automated filtering using AI is exactly what Chat Control will do – except that in Article 17’s case the filters search for “copyright infringement”. The scale is somewhat smaller than Chat Control’s, yet still immense: checking potentially hundreds of billions of social media posts, forum posts, videos, and images uploaded by users is no small task. The sketch below illustrates why such filters are inherently error-prone.
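To see the filter designer’s dilemma concretely, here is a minimal Python sketch (the file contents and the blocklist are hypothetical, purely for illustration): exact hash matching is trivially evaded by re-encoding a file, so platforms are pushed toward fuzzy “similarity” matching, which inevitably flags legal content such as quotation or parody.

```python
import hashlib

# Hypothetical blocklist of known-infringing file hashes (illustrative only).
BLOCKLIST = {hashlib.sha256(b"pirated-movie-v1").hexdigest()}

def exact_match(data: bytes) -> bool:
    """Exact cryptographic hashing: any one-byte change evades the filter."""
    return hashlib.sha256(data).hexdigest() in BLOCKLIST

print(exact_match(b"pirated-movie-v1"))  # True  - the known copy is caught
print(exact_match(b"pirated-movie-v2"))  # False - a trivially altered copy slips through

def trigrams(data: bytes) -> set:
    return {data[i:i + 3] for i in range(len(data) - 2)}

def fuzzy_match(data: bytes, reference: bytes, threshold: float = 0.6) -> bool:
    """Toy similarity matching: flag anything sufficiently similar to a
    protected work. Catching altered copies necessarily means wrongly
    flagging legal uses of the same material."""
    a, b = trigrams(data), trigrams(reference)
    return len(a & b) / len(a | b) >= threshold

reference = b"to be or not to be that is the question"
quotation = b"a review quoting: to be or not to be that is the question"
print(fuzzy_match(quotation, reference))  # True - a fair-use quotation is wrongly flagged
```

Real filters use far more sophisticated perceptual hashing and machine learning, but the trade-off is the same: tightening the filter to catch evasions widens the net around lawful content.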

Even if Chat Control fails, Article 17 is a dangerous step toward another Chat Control – in fact, it seems to be the perfect “testing ground”. Interestingly, this EFF article refers to it as the “Article 17 experiment”, and calls for “mitigating the dangerous consequences of the Article 17 experiment by focusing on user rights, specifically free speech, and by limiting the use of automated filtering, which is notoriously inaccurate” – concerns very similar to those raised about Chat Control.

Whenever a reasonably democratic government passes more restrictive laws, other democratic governments tend to follow suit, often using each other’s laws as a model. As Riana Pfefferkorn of the Stanford Internet Observatory points out, “The U.S. Senate can point to Australia and the UK as evidence that it’s OK for a democracy to severely restrict people’s ability to communicate privately and secure their data.” As far as models for encryption laws go, the U.S. LAED Act appears to have taken inspiration from Australia’s Assistance and Access Act, which itself seems to have been inspired by the UK’s Investigatory Powers Act.

As far as models for online censorship go, Germany’s NetzDG law has been wildly popular. It has already served as a model for more than a dozen countries (including some not-so-democratic ones, as predicted): Russia, Turkey, India, Malaysia, the Philippines, Venezuela, Singapore, Honduras, Vietnam, Belarus, Kenya, France, the UK, and Australia.

Several of these countries (Venezuela, Vietnam, India, Russia, Malaysia, and Kenya) require intermediaries to remove vague categories of content – including “fake news”, “defamation of religions”, “anti-government propaganda”, and “hate speech” – that can be abused to target political dissent. [18]

These are just a few examples, but we can already see an underlying pattern: many governments – both “democratic” and “not-so-democratic” – are doing similar things and working towards the same goals:

  • Control over enforcing rapid takedown of “illegal, offensive, harmful, or misleading” content
  • Control over tracing specific messages to specific users for tracking the source of some content
  • Control over service providers and compelling them to provide some method of “lawful access” to encrypted data for a given user or users

Notably, these are all “features” of Chat Control. If the EU can get away with Chat Control, other countries are bound to follow, because they are already working toward the same end goals.

Even if we can stop Chat Control, we must remain vigilant and help everyone understand why a mass surveillance AI policing system like Chat Control is so dangerous

Due to its unprecedented restrictions on human rights, particularly privacy rights, the Chat Control regulation may fail in the courts, as many people expect. Whether this happens or not, it’s important to understand why a mass surveillance AI policing system like Chat Control is so dangerous to human rights and the core ideas of freedom and privacy.

Because even if Chat Control fails in court, or we are successful in stopping it, there is a good chance we’ll see another “Chat Control” very soon. Whether it happens in the EU again or somewhere else, we need to fight it. Whatever reason is used to justify why it’s “necessary” – fighting CSAM, hate speech, misinformation, fake news, copyright infringement, vaccine passports, something else, or all of the above – we need to fight it. The trends among so-called democratic governments in recent years prove they are all working at a very rapid pace towards the same end goal for the internet – and it looks suspiciously like Chat Control.

As citizens, we can no longer afford to think of these laws and regulations in terms of our specific countries or regions; we must start thinking about their impact from a global perspective. The internet is a globally connected system, and governments worldwide are more unified than ever in cracking down on citizens’ rights, both online and offline. A new regulation in one country is a threat to the entire world – Germany’s NetzDG law is proof of this, and shows how quickly governments can take laws from other governments and adapt them to fit their goals. We must stay vigilant.

Chat Control is dangerous because it provides a legal framework for mass surveillance and automated AI policing

The Chat Control regulation gives governments a legal basis for bypassing human rights – and the existing laws that protect those rights – even though they are directly at odds with Chat Control, including:

  • The right to privacy (both online and offline)
  • The right to be treated as innocent of a crime until proven guilty
  • The right to be treated fairly in criminal proceedings and the right to a fair trial
  • Rights in terms of having control over our data (i.e., bypasses many rights under GDPR)

Chat Control also puts our data and online security in general at great risk:

  • It increases our risk of becoming a victim of several types of crime
  • It puts our most sensitive data at great risk of being misused or falling into the wrong hands

Chat Control = the end of privacy of digital correspondence

We are used to hearing about how common data collection and tracking practices are, and how they are used by companies like Facebook, Google, and Apple. Although these practices do infringe on our privacy and data rights, the violations and risks they bring are minimal compared to the risks of Chat Control’s mass surveillance with AI police, or the life-changing impact it will have on our privacy, data, and other human rights.

It’s important to note that automatic, continuous surveillance of someone’s private messages and online activities is no different from surveillance of all their in-person conversations, or an old-school phone tap or “wire tap” used to listen in on their calls. This is a major violation of privacy that should only be used in extraordinary cases, where a suspect is being investigated for very serious crimes – which is why democratic societies normally have laws limiting such surveillance to specific, targeted suspects for whom a warrant or court order has been obtained.

Mass surveillance is the modern-day equivalent of using phone tap surveillance on every citizen. It goes against the very foundation of a person’s privacy, freedom and individual sense of self. It is the opposite of how so-called democratic societies should work.

Just because we are using digital technology to have more and more of our private conversations online instead of in person does not mean we are obligated to give up our right to privacy and allow ourselves to be treated like criminals under continuous surveillance. Chat Control is a clear violation of our rights.

In Schrems I, the Court of Justice of the European Union made clear that legal frameworks granting public authorities access to data on a generalized basis compromise “the essence of the fundamental right to private life.” In other words, any law that compromises the essence of the right to private life can never be proportionate or necessary. [27]

A legal assessment by a former Court of Justice judge concluded that requiring service providers to generally and indiscriminately screen content for CSAM “violated fundamental rights guaranteed by Articles 7, 8, 11 and 16 of the Charter”, and even more so if end-to-end encrypted communications were included in the obligation. The assessment further concluded:

“Undoubtedly fighting online child abuse material is of utmost importance. However, the requirement of Article 24(2) of the Charter to ensure that in actions relating to children, the child’s best interests must be a primary consideration does not mean that those interests prevail over all other interests. The rights of the child must be given particular weight, but they cannot completely supersede the rights and freedoms of others.” [28]

In addition to violating fundamental rights and failing the “proportionate and necessary” test, indiscriminate data retention has been studied and found wanting: a study by the European Parliament’s Research Service (EPRS) showed that indiscriminate telecommunications data retention has no statistically significant impact on crime or crime clearance.

A similar data retention program in the U.S. has also been found to have little return on investment in fighting crime, having produced new leads in only two cases. According to a Privacy and Civil Liberties Oversight Board report, the decision to end the program was made “after balancing the program’s relative intelligence value, associated costs, and compliance and data-integrity concerns caused by the unique complexities of using these provider-generated business records for intelligence purposes.”

Chat Control puts us at risk of becoming wrongfully suspected, charged, or convicted of a crime

Currently used chat control algorithms have very high error rates, sending thousands of false reports to police. According to the Swiss Federal Police, about 86% of the reports falsely accuse innocent users.

With such high error rates, a majority of the cases reported to law enforcement will concern legal content rather than illegal content – so a majority of the sensitive data sent to private organizations and law enforcement agencies will likely be sexually explicit but legal content from innocent people. Chat Control thus ensures that more people and more entities will have access to sexually explicit images of children and teenagers, increasing the chances that these images fall into the wrong hands and putting minors at greater risk of becoming victims – the opposite of protecting victims or reducing the volume of images in circulation. The arithmetic sketched below shows why this outcome is inevitable.
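Here is a minimal back-of-the-envelope sketch of the base-rate problem. Every number except the Swiss 86% figure is an illustrative assumption, chosen generously in the scanner’s favor: because illegal material is extremely rare relative to the billions of innocent messages scanned, even a tiny false positive rate buries the true hits under false accusations.

```python
# Base-rate arithmetic: why mass scanning buries police in false reports.
# All numbers below are illustrative assumptions, not figures from the regulation.

messages_scanned    = 10_000_000_000  # messages scanned per day (assumed)
prevalence          = 1e-7            # fraction that is actually illegal (assumed)
true_positive_rate  = 0.90            # scanner catches 90% of real material (assumed)
false_positive_rate = 0.001           # 0.1% of innocent messages wrongly flagged (assumed)

illegal  = messages_scanned * prevalence
innocent = messages_scanned - illegal

true_hits  = illegal * true_positive_rate
false_hits = innocent * false_positive_rate

precision = true_hits / (true_hits + false_hits)
print(f"reports per day: {true_hits + false_hits:,.0f}")
print(f"false reports:   {false_hits:,.0f} ({1 - precision:.2%} of all reports)")
# ~10 million reports per day, of which ~99.99% accuse innocent users -
# the same failure mode behind the Swiss Federal Police's 86% figure.
```

Even making the scanner a hundred times more accurate only shifts the ratio; it cannot change the fact that the innocent vastly outnumber the guilty in any indiscriminate scan.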

One of the worst parts of the Chat Control regulation is that it makes very little mention of safeguards, and sets no clear requirements for mechanisms by which the accused can appeal reports or accusations against them. Anyone accused of or charged with a crime has the right to due process, and anyone falsely accused has the right to be cleared of wrongful accusations, charges, or convictions. Under Chat Control, even getting an appeal started may be difficult, since affected persons are not notified when they are reported to law enforcement. Given that a majority of the people affected by this system will be innocent of any crime, they must be able to appeal the report and be cleared of any false accusations or charges.

As you can imagine, being wrongfully accused, charged, or convicted of a crime – especially a crime of this nature – can ruin someone’s life. Law enforcement could search your house or other property (which brings its own set of risks), arrest you, and take you to jail. A wrongful accusation or arrest can destroy your personal or professional reputation, cost you your job, and lead to divorce, public shaming, or even false imprisonment. In some situations, it could cost you your life.

An incredibly tragic example where all of the above life-shattering possibilities actually happened to many people was reported in this CBC article from 2006, in which an international investigation of internet-based child pornography led to accusations against innocent victims of credit card fraud. The investigation began in 1999 after a website called Landslide Productions was caught selling access to child pornography. When police closed it down, they discovered a database containing over 100,000 names and credit card details from around the world.

Unfortunately, some police agencies ended up raiding thousands of homes and offices simply because their owners’ names appeared in the database. In some of these cases, the accused were automatically treated as criminals rather than given the benefit of the doubt, and some were even convicted with no evidence. In the United Kingdom, almost 40 falsely accused people committed suicide, as did six in Australia and at least one in Canada.

Chat Control is a dangerous expansion of law enforcement, mass surveillance capabilities and potential for abuse

Chat Control expands the reach of law enforcement to unprecedented levels: more entities – including AI systems – and unknown numbers of private organizations will now perform activities that typically only law enforcement has the legal authority to do. This includes “interrogating suspects” (scanning their communications), “arresting them” (when an algorithm flags their message), “gathering evidence” (transmitting or storing their data if it is flagged), and “charging them with a crime” (reporting them to law enforcement) – all without any legal authority to investigate crimes, arrest potential criminals, or enforce laws. This is further privatization of law enforcement, which has already reached dangerous levels in many countries, with little or no oversight or accountability for abuse of power.

Indeed, the German Federal Data Protection Commissioner sharply criticized the EU plans:

“Blanket and indiscriminate monitoring of digital communications channels is neither effective nor necessary to track down online child abuse. Sexualized violence against children must be tackled with targeted and more specific measures. Investigative work is the task of law enforcement authorities and must not be outsourced to private operators of messenger services.” [32]

Such a system is ripe for abuse. Anyone with access could easily misuse this data to harm people or commit other crimes – for example, publicly releasing intimate messages or nude photos of someone they want to harass, embarrass, incriminate, threaten, or blackmail. Depending on how the system is set up, it may even be possible to plant false evidence, or to swap a guilty person’s data (a true hit on an illegal image) with an innocent person’s data (a false hit on a legal image). As stated previously, more people having access to this data means more risk – especially for children and teenagers – of private explicit images falling into the wrong hands.

There are real-life examples of this type of abuse – NSA staff have reportedly circulated nude photos of female and male citizens in the past, and a Google engineer was reported to have stalked at least four underage teens for months before the company was notified of the abuses.

This raises several questions: Who will oversee these private organizations? What accountability will they have? Who will enforce violations? It seems to me the Chat Control regulation is sorely lacking in meaningful safeguard requirements in these areas. In some countries, a private organization that performs a function normally done by a public authority may be required to follow additional laws or regulations that would not otherwise apply to it – precisely because it is acting on behalf of a government agency. And does this mean such an organization legally qualifies as a government entity?

Case in point: The National Center for Missing and Exploited Children (NCMEC):

NCMEC is a non-profit organization in the U.S. that already works with law enforcement and tech companies on finding and reporting CSAM. NCMEC reviews the reports it receives and distributes them to law enforcement agencies in the U.S. as well as international partners. Given its well-established and integral role, it is almost certain NCMEC will be one of the “organizations with whom data will be shared” by service providers under the Chat Control regulation.

Despite being a private entity, in 2016 a U.S. federal court held that NCMEC qualified legally as a government entity because it performed a number of essential government functions. [38]

This legal case has important implications, because it highlights the clash between people’s rights and the reach of law enforcement and other government entities regarding warrantless or private searches – and the legal complexities that arise when a search is done by a private entity, by a government entity, or by a private entity acting on behalf of a government entity. Legal challenges and changes to existing laws (in the U.S. or elsewhere) may affect NCMEC and other private entities that work with law enforcement on CSAM reporting, as well as the tech companies who work with them. Indeed, tech companies that already work with these entities are concerned they too may be viewed as government actors rather than private entities, which could subject them to new legal requirements and court challenges when they police their own sites.

I’m not a legal expert so I won’t speculate further on this. However, given the importance of protecting people’s rights regarding unlawful searches, and protecting the chain of custody regarding evidence in a criminal investigation, further evaluation and input from people who have legal expertise in these areas is warranted.

Chat Control creates dangerous risks to our most personal data and our online security and safety

It’s obvious that sexts, intimate messages, and nude photos are very personal, sensitive data. Moreover, much of the data flagged by the scanning process could be used to identify an individual person, especially images where faces are visible. This category of data, called “biometric data”, requires the highest levels of protection from unauthorized access. Other examples of biometric data include fingerprints, retina scans, and DNA.

If the algorithms flag any images where faces are visible (whether explicit or not), those images can be used to identify a specific person. This data could be misused for legal but unauthorized purposes (e.g., storing facial recognition data for other legal uses, but without consent), or for any number of illegal ones, both online and offline – for example, it could be combined with other personal data to target individuals from certain groups for discrimination, or to identify and target a specific person.

Giving so many people and unknown entities access to this data creates many more opportunities for unauthorized access, and an unacceptable risk of the data being misused, stolen, sold, or publicly released.

In addition to being at risk when it’s stored, biometric data is at risk while it’s being transmitted from one person to another – which becomes even more of a risk with Chat Control.

Chat Control requires all communications traffic to be searched in real time for illegal content, but in order to search private communications secured with encryption, the encryption has to be broken.
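To make that conflict concrete, here is a minimal sketch of end-to-end encryption using the PyNaCl library (`pip install pynacl`); the names and message are placeholders. The point is structural: the relaying provider only ever holds ciphertext, so there is nothing for a scanning algorithm to inspect unless the scheme itself is subverted.

```python
# Minimal end-to-end encryption sketch using PyNaCl.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device;
# private keys never leave that device.
alice_key = PrivateKey.generate()
bob_key   = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext  = sending_box.encrypt(b"meet me at the shelter at 6pm")

# The provider relaying the message sees only this - random-looking bytes
# with nothing readable for a scanning algorithm to inspect.
print(bytes(ciphertext).hex())

# Only Bob, holding his private key, can decrypt.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet me at the shelter at 6pm'
```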

Encryption is our most powerful weapon against data theft and is essential for many legitimate things we do online. It keeps our sensitive data safe, such as bank account and credit card information when we bank or shop online. Doctors, lawyers, and other professionals need encryption to maintain confidentiality in conversations with patients and clients. Encryption can literally be a matter of life or death for anyone who needs to stay anonymous online and keep their conversations confidential – for example, independent journalists, whistleblowers, and activists, or victims of domestic violence, rape, stalking, or child abuse who are trying to escape their attackers or seeking safety and support.

Alexander Hanff (Victim of Child Abuse and Privacy Activist) has criticised the plans [for the Chat Control Regulation] because indiscriminate screening would deprive victims of channels for confidential counselling, therapy and help:

“I didn’t have confidential communications tools when I was raped; all my communications were monitored by my abusers – there was nothing I could do, there was no confidence. […] I can’t help but wonder how much different my life would have been had I had access to these modern technologies. [The Chat Control Regulation] will drive abuse underground making it far more difficult to detect; it will inhibit support groups from being able to help abuse victims – IT WILL DESTROY LIVES.” [40]

Despite suggestions that gaining access to encrypted data “might not require” breaking the encryption, decrypting the contents of encrypted data is, by definition, breaking the encryption.

Likewise, once a method exists to break a type of encryption, it cannot be limited to “only certain people” or “only certain messages” with any certainty – it is essentially a master key that can unlock anyone’s messages locked with that key. This type of access is an encryption backdoor (sometimes called a universal key, or “exceptional access”). Encryption backdoors are a major threat to everyone’s security once they fall into the wrong hands or are discovered and exploited by the “bad guys” – and it is a matter of when, not if, that will happen. Other approaches, such as “client-side scanning” or “secure enclaves”, are sometimes presented as a “more reasonable” intervention for end-to-end encryption. This is misleading: they are just different ways of breaking encryption, and they are still backdoors (so your privacy and security are still at risk).
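Client-side scanning illustrates the point. In this minimal sketch (a hypothetical design, building on the PyNaCl example above), the message is inspected on the user’s device before it is encrypted – the mathematics of the encryption remain untouched, yet every message is still read and can be reported:

```python
import hashlib
from nacl.public import PrivateKey, Box

# Hypothetical database of hashes of "prohibited" content, pushed to every
# device. Whoever controls this list decides what gets reported - and the
# list can be silently expanded to cover any content at all.
PROHIBITED_HASHES = {hashlib.sha256(b"known-prohibited-image").hexdigest()}

def report_to_authorities(plaintext: bytes) -> None:
    # Stand-in for forwarding the message and the user's identity for review;
    # under Chat Control, the user would never be notified.
    print("match found - message and user identity forwarded")

def send_with_client_side_scanning(plaintext: bytes, sender_key, recipient_pub) -> bytes:
    # The scan runs on the plaintext BEFORE encryption. The end-to-end
    # encryption below is technically intact, but confidentiality is not:
    # the message has already been inspected against a secret list.
    if hashlib.sha256(plaintext).hexdigest() in PROHIBITED_HASHES:
        report_to_authorities(plaintext)
    return Box(sender_key, recipient_pub).encrypt(plaintext)

alice, bob = PrivateKey.generate(), PrivateKey.generate()
send_with_client_side_scanning(b"known-prohibited-image", alice, bob.public_key)
```

Real proposals use perceptual rather than exact hashes, which reintroduces the false-positive problem sketched earlier – but the architecture, and why it amounts to a backdoor, is the same.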

Is it possible for Chat Control to work without putting users at undue risk? (Hint: the answer is no)

A very concerning revelation about the Chat Control regulation is that the expectations and decision-making that led to it appear to have been influenced by dangerously faulty logic about the risks and benefits of the automated content scanning and AI technologies Chat Control is based on. This came to light in a draft European Commission report called “Technical solutions to detect child sexual abuse in end-to-end encrypted communications”, which was leaked in September 2020.

In response to the leaked report, the Global Encryption Coalition warns: “Insecure communications make users more vulnerable to the crimes we are trying to prevent together. These requirements would force service providers to undermine the security of their encrypted end-to-end services, putting at risk the security of billions of people who depend on these services every day. Put simply, there is no way to break encryption without making everyone, including children, more vulnerable.”

The Global Encryption Coalition also issued its own report in response, titled “Breaking encryption myths”. In it, the group presents its analysis and findings on “what the European Commission’s leaked report got wrong about online security”. Essentially, all of the solutions proposed in the leaked report would require breaking encryption with some form of backdoor access, meaning each solution would put all users, including children, at far greater risk of harm.

Some highlights from the Global Encryption Coalition’s report:

“Weakening security to detect prohibited material is irresponsible. The report doesn’t just propose flawed methods for confirming whether material is actually prohibited. It suggests risky approaches for spotting that content in the first place. Every client-side scanning approach mentioned in the report opens up risk for users. And that’s without including those potentially dangerous manual review steps…”.

A particularly concerning finding about risk assessments: “The leaked report, however, makes a misleading risk analysis of the security of these systems, often labelling them as “medium” or “low”. These risk assessments are underestimated as there are ample examples of these complex systems being compromised.”

“Breaking end-to-end encryption to curb objectionable content online is like trying to solve one problem by creating 1,000 more. Insecure communications make users more vulnerable to the crimes we collectively are trying to prevent.”

[In conclusion], “EU policymakers and lawmakers need to understand the real impact of content moderation methods if they are to make sound decisions to keep citizens safe online. This leaked report fails to outline the serious risks of requiring communications service providers to detect prohibited content. These requirements would force service providers to undermine the security of their end-to-end encrypted services, jeopardizing the safety of the billions of people who rely on them each day. Put simply, there is no way to break encryption without making everyone, including children, more vulnerable.” [48]

We cannot afford to surrender our most fundamental human rights to a global surveillance state

Hopefully I’ve convinced you that Chat Control – or any other form of mass surveillance – is a monstrous threat to humanity and should never be allowed to happen, especially mass surveillance with AI police that listens in on every conversation we have and searches everything we do for evidence of a crime, particularly crimes as serious as child sexual abuse. (I believe we should also ban existing mass surveillance systems, but I’ll save that discussion for another day.)

What can we do to stop Chat Control?

Take action!

Even if you’re not an EU citizen, you can help spread awareness in your country! It’s important that everyone is educated on the dangers of mass surveillance, encryption backdoors, and AI policing.

Share information!

Patrick Breyer (Member of the European Parliament) has many great resources on his website – the following items are taken from there:

Playlist of Chat Control informational videos

Talk about it!

Inform others about the dangers of Chat Control. Find and share Tweet templates, pics, and videos. Of course, you can also create your own images and videos.

Raise awareness on social media:

Generate attention on social media! You can use this template tweet. Use the hashtags #chatcontrol and #secrecyofcorrespondence.

Contact your MEPs:

Reach out to your representatives in Parliament! With one click on the name of the competent representative, you can send a pre-worded message to the negotiators in the leading LIBE committee of the European Parliament. However, experience has shown that individually worded messages are more effective.

Contact your government:

Contact your government’s permanent representation. EU governments are negotiating with the European Parliament.

Generate media attention:

Generate media attention! So far, very few media outlets have covered the EU’s messaging and Chat Control plans. Get in touch with newspapers and ask them to cover the subject – online and offline.

Ask your e-mail, messaging and chat service providers!

Avoid Gmail, Facebook Messenger, outlook.com, and the Xbox chat function, where indiscriminate chat control is already taking place. Ask your email, messaging, and chat providers whether they generally monitor private messages for suspicious content, or plan to do so.

Additional Resources:

More Chat Control resources are available including additional information and arguments, documents on the legislative procedure, critical commentary, and further reading.

References

  1. Chat Control Trilogue Agreement:
    https://www.patrick-breyer.de/wp-content/uploads/2021/05/202105_Chatcontrol_Trilogue_Agreement.pdf
  2. Messaging and Chat Control:
    https://www.patrick-breyer.de/en/posts/message-screening/?lang=en
  3. Poll: 72% of citizens oppose EU plans to search all private messages for allegedly illegal material and report to the police:
    https://www.patrick-breyer.de/en/poll-72-of-citizens-oppose-eu-plans-to-search-all-private-messages-for-allegedly-illegal-material-and-report-to-the-police
  4. Chat control: Home Affairs Committee approves regulation on indiscriminate searching of private communications:
    https://www.patrick-breyer.de/en/chat-control-home-affairs-committee-approves-regulation-on-indiscriminate-searching-of-private-communications/
  5. The EARN IT Act Violates the Constitution:
    https://www.eff.org/deeplinks/2020/03/earn-it-act-violates-constitution
  6. There’s Now an Even Worse Anti-Encryption Bill Than EARN IT. That Doesn’t Make the EARN IT Bill OK.:
    http://cyberlaw.stanford.edu/blog/2020/06/there%E2%80%99s-now-even-worse-anti-encryption-bill-earn-it-doesn%E2%80%99t-make-earn-it-bill-ok
  7. Stop SESTA and FOSTA:
    https://stopsesta.org
  8. Erased – The Impact of FOSTA-SESTA and the Removal of Backpage:
    https://hackinghustling.org/erased-the-impact-of-fosta-sesta-2020/
  9. Woodhull Freedom Foundation et al. v. United States:
    https://www.eff.org/cases/woodhull-freedom-foundation-et-al-v-united-states
  10. The EARN IT Act: How to Ban End-to-End Encryption Without Actually Banning It:
    http://cyberlaw.stanford.edu/blog/2020/01/earn-it-act-how-ban-end-end-encryption-without-actually-banning-it
  11. UK’s Investigatory Powers Act:
    https://www.theregister.com/2016/11/30/investigatory_powers_act_backdoors/
  12. Germany’s Network Enforcement (NetzDG) law:
    https://cdt.org/insights/overview-of-the-netzdg-network-enforcement-law/
  13. Australia’s Assistance and Access Act:
    https://www.zdnet.com/article/whats-actually-in-australias-encryption-laws-everything-you-need-to-know/
  14. RSF fears censorship resulting from German law on online hate content:
    https://rsf.org/en/news/rsf-fears-censorship-resulting-german-law-online-hate-content
  15. Overview of the NetzDG Network Enforcement Law:
    https://cdt.org/insights/overview-of-the-netzdg-network-enforcement-law/
  16. Germany’s Speech Laws Continue To Be A Raging Dumpster Fire Of Censorial Stupidity:
    https://www.techdirt.com/articles/20180217/19141939260/germanys-speech-laws-continue-to-be-raging-dumpster-fire-censorial-stupidity.shtml
  17. Facebook-Gesetz: Schießt Maas übers Ziel hinaus? (Facebook law: Is Maas overshooting the mark?):
    https://www.augsburger-allgemeine.de/politik/Soziale-Netzwerke-Facebook-Gesetz-Schiesst-Maas-uebers-Ziel-hinaus-id41792751.html
  18. Analysis: The Digital Berlin Wall: How Germany (Accidentally) Created a Prototype for Global Online Censorship:
    http://justitia-int.org/en/the-digital-berlin-wall-how-germany-created-a-prototype-for-global-online-censorship/
  19. Guidance on Article 17 of Directive 2019/790 on Copyright in the Digital Single Market:
    https://digital-strategy.ec.europa.eu/en/library/guidance-article-17-directive-2019790-copyright-digital-single-market
  20. The European Copyright Directive: What Is It, and Why Has It Drawn More Controversy Than Any Other Directive In EU History?:
    https://www.eff.org/deeplinks/2019/03/european-copyright-directive-what-it-and-why-has-it-drawn-more-controversy-any
  21. The EU Commission’s Refuses to Let Go of Filters:
    https://www.eff.org/deeplinks/2021/06/eu-commissions-guidance-article-17-did-not-let-go-filters
  22. EFF to EU Commission on Article 17: Prioritize Users’ Rights, Let Go of Filters:
    https://www.eff.org/deeplinks/2020/09/eff-eu-commission-article-17-prioritize-users-rights-let-go-filters
  23. White noise video on YouTube hit by five copyright claims:
    https://www.bbc.com/news/technology-42580523
  24. The future is here today: you can’t play Bach on Facebook because Sony says they own his composition:
    https://boingboing.net/2018/09/05/mozart-bach-sorta-mach.html
  25. Turkey’s New Internet Law Is the Worst Version of Germany’s NetzDG Yet:
    https://www.eff.org/deeplinks/2020/07/turkeys-new-internet-law-worst-version-germanys-netzdg-yet
  26. Court of Justice of the European Union, Schrems I (Case C‑362/14):
    https://curia.europa.eu/juris/document/document.jsf?text=&docid=169195&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=143358
  27. When Law Enforcement Wants Your Private Communications, What Legal Safeguards Are in Place in Latin America and Spain?:
    https://www.eff.org/deeplinks/2021/02/when-law-enforcement-wants-your-private-communications-what-legal-safeguards-are
  28. Legal opinion commissioned by MEP Patrick Breyer, The Greens/EFA Group in the European Parliament (Hamburg, March 2021):
    https://www.patrick-breyer.de/wp-content/uploads/2021/03/Legal-Opinion-Screening-for-child-pornography-2021-03-04.pdf
  29. General data retention / effects on crime:
    https://www.patrick-breyer.de/wp-content/uploads/2020/10/EPRS_103906-_General_data_retention___effects_on_crime_FINAL.docx
  30. Study: Data retention has no impact on crime:
    https://www.patrick-breyer.de/en/study-data-retention-has-no-impact-on-crime/
  31. Privacy and Civil Liberties Oversight Board Report on the NSA’s Call Detail Records Program under the USA Freedom Act:
    https://documents.pclob.gov/prod/Documents/Projects/3/CDR%20Fact%20sheet%20FINAL.pdf
  32. Mass screening of electronic mail: Facebook suspends controversial ‘incrimination machine’:
    https://www.patrick-breyer.de/en/mass-screening-of-electronic-mail-facebook-suspends-controversial-incrimination-machine/
  33. EU will verdachtslose Überwachung privater Chats erlauben (EU wants to allow suspicionless surveillance of private chats):
    https://www.tagesanzeiger.ch/bruessel-erlaubt-ueberwachung-privater-mails-391994232965
  34. Global child porn probe led to false accusations:
    https://www.cbc.ca/news/world/global-child-porn-probe-led-to-false-accusations-1.571698
  35. Racy Photos Were Often Shared at N.S.A., Snowden Says:
    https://www.nytimes.com/2014/07/21/us/politics/edward-snowden-at-nsa-sexually-explicit-photos-often-shared.html?_r=0
  36. GCreep: Google Engineer Stalked Teens, Spied on Chats (Updated):
    https://gawker.com/5637234/gcreep-google-engineer-stalked-teens-spied-on-chats
  37. National Center for Missing and Exploited Children (NCMEC):
    https://www.missingkids.org
  38. Court Says Child Porn Clearinghouse Acts As A Government Entity, Cannot Perform ‘Private Searches’:
    https://www.techdirt.com/articles/20160809/07551035194/court-says-child-porn-clearinghouse-acts-as-government-entity-cannot-perform-private-searches.shtml
  39. The Internet Is Overrun With Images of Child Sexual Abuse. What Went Wrong?:
    https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html
  40. Alexander Hanff (Victim of Child Abuse and Privacy Activist):
    “EU Parliament are about to pass a derogation which will result in the total surveillance of over 500M Europeans”:
    https://www.linkedin.com/pulse/eu-parliament-pass-derogation-which-result-total-over-alexander-hanff/?published=t
  41. Former ECJ judge: EU plans for indiscriminate screening of private messages („chat control“) violate fundamental rights:
    https://www.patrick-breyer.de/en/former-ecj-judge-eu-plans-for-indiscriminate-screening-of-private-messages-chat-control-violate-fundamental-rights/
  42. There is No Middle Ground on Encryption:
    https://www.eff.org/deeplinks/2018/05/there-no-middle-ground-encryption
  43. Why Adding Client-Side Scanning Breaks End-To-End Encryption:
    https://www.eff.org/deeplinks/2019/11/why-adding-client-side-scanning-breaks-end-end-encryption
  44. Don’t Let Encrypted Messaging Become a Hollow Promise:
    https://www.eff.org/deeplinks/2019/07/dont-let-encrypted-messaging-become-hollow-promise
  45. Draft European Commission report: “Technical solutions to detect child sexual abuse in end-to-end encrypted communications” (leaked in September 2020):
    https://www.politico.eu/wp-content/uploads/2020/09/SKM_C45820090717470-1_new.pdf
  46. Global Encryption Coalition:
    https://www.globalencryption.org
  47. Mass surveillance to protect children? Opposition to new EU private messaging upload filter plans:
    https://www.patrick-breyer.de/en/mass-surveillance-to-protect-children-opposition-to-new-eu-private-messaging-upload-filter-plans/
  48. Breaking encryption myths (what the European Commission’s leaked report got wrong about online security):
    https://www.globalencryption.org/2020/11/breaking-encryption-myths/
