
A Critique of Pure Friction: Does More Hassle Mean Additional Safety and Better Regulation?

The recent spate of digital regulation initiatives in Europe has one thing in common: each introduces new procedures that create friction in the system – consent forms, ex ante impact and risk assessments, reporting requirements, rights of appeal. Built-in hurdles like these can be found in a lot of existing European Union legislation, such as the general data protection regulation and the platform-to-business regulation. And the more recent European Commission proposals – the digital services act (DSA), the digital markets act (DMA) and the artificial intelligence act – would go a long way towards embedding the idea that the more fussiness we can bring to the Internet experience, the better the consumer will be protected and the greater the threats we will have successfully evaded.

But when it comes to friction, more is not necessarily better. Think of a car: too little friction and it loses control, but too much friction and it stops entirely. And detailed case-by-case or ex ante procedures have one other significant drawback: they are largely ineffective at dealing with the massive scale and unpredictable human behaviour that characterise social media. They can even slow down and hamper efforts to keep consumers genuinely protected online or to respond in real time to the threats that actually exist on the Internet.

The ongoing negotiations over the DSA and the DMA – the complex legislative packages that must be agreed by the Council of the European Union, the European Parliament and the European Commission to become law – have become a battlefield where the EU’s big legislative guns are trained on each other to see who can add more and more friction to the system. For instance, Council of the European Union negotiators propose to grant a “right-to-appeal” in the DSA to anyone who flags a posted item – essentially a message to the platform that the viewer thinks the post is inappropriate and should be removed – if the post was examined but found to be in conformity with community guidelines. In other words, a viewer can flag a post for whatever reason, and, if the service provider does not take down the content, the flagger has a right to challenge the decision. To understand the implications, bear in mind that YouTube received a staggering 74,752,570 flags in the last quarter of 2021 – meaning that, had the new rule been in effect, millions of YouTube users would have received detailed explanations in Q4 of how their flag was processed, along with a right to launch proceedings to have that flagging decision overturned.

But the majority of flags are little more than low-effort ways of expressing aversion and throwing a bit of mud at something or someone a viewer dislikes. In fact, when looking at removal statistics, YouTube reports that only 232,737 videos were taken down in Q4 because of flags – roughly one video for every 320 flagged. And the most flagged person in the world? Justin Bieber. You may or may not like his music or his style, but little of it poses a harm that could justify its removal from social media on community-guideline grounds (at least to this author’s ears and eyes). Incidentally, Tim O’Reilly and others have written insightfully about this. Interactivity, in O’Reilly’s view, is based on an “architecture of participation.” Flagging posts or liking comments is good and fun and interesting. But is it really a form of commentary on a par with free speech?
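
As it happens, that one-for-every-320 ratio is easy to verify from YouTube’s own transparency figures cited above. A minimal sketch in Python:

    # Flags vs. removals, YouTube transparency figures for Q4 2021 (cited above).
    flags = 74_752_570      # items flagged by viewers
    removals = 232_737      # videos removed as a result of those flags

    print(f"flags per removal: {flags / removals:.0f}")      # ~321
    print(f"share of flags upheld: {removals / flags:.2%}")  # ~0.31%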

The European Parliament has also been eager to tack on more friction, flexing its legislative muscle with a host of add-ons to the existing proposal. Among the many European Parliament amendments:

  1. Extend the requirement to notify users when their posted content is “demoted” or made less visible in search results or sharing, and not just when it is actually removed, as is the practice now.
  2. Require e-commerce platforms to gather and verify data from traders such as contact details, goods certification, trade registry certificates and payment details before allowing those companies to sell over the platform.
  3. Extend the requirement for a full risk assessment by very large platforms from a yearly basis to every time a new service is released.
  4. Require large platforms to ask for GDPR-like consent when they combine data from different services (such as iCloud and Apple Music, which are already bundled on many people’s Macs).

These proposals are exactly that: proposals. They will now be discussed and possibly corrected or dismissed. But they reveal an underlying way of thinking among policymakers: the idea that increasing friction in the system will make users safer and the Internet more secure. The requirement to notify users every time content is demoted, for one, would lead to an explosion of notifications regarding a routine function of the Internet – one that is largely data-driven and built to deliver relevant content and meaningful experiences to the 3.5 billion people who use it. The e-commerce requirements, too, while well intentioned, are an ocean away from the very real problem of counterfeit goods, a trade conducted mostly through large, container-based shipments – yet the proposal would radically increase the amount of paperwork, and the number of compliance thresholds, for small businesses that use platforms to sell. And the proposal that platforms be required to publish full risk assessments before a new service is rolled out is the most dangerous of all. Today, in the era of agile, iterative development and permanent beta, services are released and updated almost daily, making the task impossible. And imagine if such a requirement had been in place while the WannaCry ransomware attack was underway. Sometimes platforms need to act quickly. Sometimes we really want them to.

To be clear, adding friction can be a very good idea in some situations. There are, for example, lengthy, time-consuming requirements in place before a person or couple can sign a mortgage or a new drug can be widely distributed to the population. But when applied indiscriminately, friction can penalise (by accident or on purpose) the weakest parts of society, such as minorities and small businesses. In at least one country, multiple identity checks and forms have been designed and used to disenfranchise minority voters. In other places, unclear and lengthy importing procedures are a well-known kind of non-tariff barrier which harms big and small businesses alike. The European Commission itself has raised a flag on this practice, noting in its action plan for better implementation and enforcement of single market rules that “SMEs are the first to be penalised by administrative burdens and complexity.”

A good example of the unintended consequences of adding too much friction to the system is the general data protection regulation (GDPR). I am not referring to the cumbersome user experience caused by pop-up proliferation, which is arguably a deliberate achievement: it raises users’ awareness of how their personal data are treated. But when it comes to economic impact, small businesses and new startups have been excessively penalised by the indirect effects of the regulation, as much recent scholarship has shown. Stricter privacy controls reduced competition in advertising markets and increased market concentration, because cookie-consent rates are lower for smaller players. Younger and early-stage ventures are also more affected than established companies. And ad-blockers reduce product and brand discovery. This does not mean that GDPR failed, but it does mean that even a measure considered successful must deal carefully with direct and indirect effects – and the consequences are often different from the expectations. In this context, continuously adding new friction is unlikely to work when it is applied too brusquely – or given tasks to perform that it is singularly ill-equipped to deliver. It can even add to the information and other asymmetries it is intended to correct. Jura Liaukonyte, professor of applied economics and management at Cornell University, put it succinctly in a recent Twitter thread: “Evidence is rapidly accumulating, suggesting that stricter privacy controls exacerbate inequality between large and small businesses.”

What’s more, the European obsession with introducing friction as a form of regulation reflects a fundamental misalignment with the nature of digital services and, more generally, with today’s complex society. Ex ante assessments can work in specific cases, but they are not effective tools for resolving the complex, non-linear problems generated by human use of digital technologies at massive scale. As Daphne Keller, director of the platform regulation programme at the Stanford Cyber Policy Centre, puts it, such measures “seem built on the hope that with enough rules and procedures in place, governance of messy, organic human behaviour can become systematised, calculable and predictable.” The proliferation of ex ante measures is particularly ironic in this context, because the distinguishing feature of digital services is their iterative nature – driven by the fundamental fact that human behaviour is difficult to predict outside an agile, data-driven context.

Let me be clear. I don’t advocate a deregulatory, frictionless approach based on free speech and “innovation without permission.” And even less do I endorse the idea that adding friction makes better regulation. In fact, we do not have to choose between the two. We have examples of different approaches in government, bridging the gap between cultures of ex post result-oriented decisions (typical of digital services) and ex ante process-oriented rulings (typical of traditional government).

Recent regulatory trends point to new approaches, such as sandboxes and so-called “agile regulation” – a toolkit for effectively regulating fast-moving digital markets through data analytics and iteration. And we find signs of these fast-emerging approaches in the DSA and other recent proposals. For one, the DSA’s strict reporting requirements go in the direction of fostering a more responsive, data-driven regulatory system – one based more on actual than on expected behaviour. But the data troves these new reporting requirements produce will need to be strategically designed to avoid inconsistencies and to make the data easier to understand and act upon. As it stands, too much performance data – much of it generated through self-reporting and the code-of-conduct approach – is buried across widely dispersed European Commission initiatives or in single-company reports. To be genuinely useful, the metrics should be carefully designed, standardised across different policy instruments (possibly at the global level), published in open data formats and made accessible through synthesis reports with interactive visualisation. If that could be achieved, the results could be more widely used and ultimately more effective in regulating a fast-moving phenomenon that is difficult to nail down ex ante even under the best of circumstances.

In other words, ex-ante risk assessments, to the extent that they are used, should be targeted, limited in number, carefully designed and consistently followed up through iterative monitoring. Regulators must ask, is the expected impact taking place? And if not, what are the reasons behind that? Otherwise, the rules will be at best little more than cumbersome regulatory requirements and at worst box-ticking exercises that give the illusion of control and allow responsibilities to be evaded.

Last but not least, when it comes to overseeing moderation at such massive scale, an approach built on notification rights and a case-by-case appeal process seems less effective than a risk-based approach built on well-designed, transparent sampling. There is a trade-off between quantity and quality of moderation – not just for the moderators, but also for those, like policymakers, who intend to regulate this moderation. Providing every user, commenter and flagger with the right to monitor and appeal will not lead to a fairer system, and could easily be abused by those most able to play the game.

Regulating digital technologies is one of the profound challenges of this generation. It will have a deep impact on our democracies and way of life. We need to gather data, share knowledge and discuss openly to understand what works – and what doesn’t. And we need to see how that knowledge can be used to generate new ideas and more effective approaches. But it is also important to quickly drop ideas that do not work. The overreliance on friction is one of them.

David Osimo is director of research at the Lisbon Council.

Context is King: How Correct Data Can Lead to False Conclusions

On 12 October 2021, in a now-infamous Joe Rogan podcast, anti-vaxxer Alex Berenson said that “the vast majority of people in Britain who died in September were fully vaccinated,” offering this dubious fact to support his view that COVID-19 vaccines were as dangerous as the virus itself. The statement, while formally correct and based on official data from the United Kingdom National Health Service, was highly misleading: it missed the crucial context that the vast majority of the British population was vaccinated, so of course the vast majority of those who died were vaccinated as well. In fact, the mortality rate of the unvaccinated population – seen in the right context – was several times higher than the mortality rate of vaccinated people. In the aftermath, many scientists and fact checkers moved to clarify this. Charts were published to unpack the issues involved and inform the public about the importance of contextual data in drawing the right conclusions. The whole story went down as an episode of misinformation and has since blown up into a full-fledged crisis at Spotify Technology s.a., the company hosting and financing the podcast.


[Chart omitted. Source: Our World in Data]
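
The underlying statistical point is worth making explicit. The sketch below uses deliberately hypothetical numbers – not the actual NHS figures – to show how a large majority of deaths can occur among the vaccinated even when vaccination cuts each individual’s risk several-fold:

    # Hypothetical illustration of the base-rate effect. Assume 95% of a
    # population is vaccinated and vaccination cuts mortality five-fold.
    population = 1_000_000
    vaccinated = int(population * 0.95)        # 950,000 people
    unvaccinated = population - vaccinated     #  50,000 people

    mortality_unvaccinated = 0.0010            # hypothetical per-person risk
    mortality_vaccinated = 0.0002              # five times lower

    deaths_vaccinated = vaccinated * mortality_vaccinated        # 190 deaths
    deaths_unvaccinated = unvaccinated * mortality_unvaccinated  #  50 deaths

    share = deaths_vaccinated / (deaths_vaccinated + deaths_unvaccinated)
    print(f"{share:.0%} of deaths occur among the vaccinated")   # ~79%

In this toy example, “the vast majority of the dead were vaccinated” is true – yet each unvaccinated person is five times more likely to die. That is exactly the context Berenson’s claim omitted.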

Of course, vaccination is a uniquely important topic, attracting high levels of attention, so the reaction was prompt and effective. But the majority of public-policy issues do not get such scrutiny and attention, and the use of out-of-context statistics is much less likely to be noticed or corrected.

One area where similar leaps to conclusions are taking place is the ongoing debate on the digital services act (DSA), an omnibus piece of European Union legislation making its way through the cumbersome EU decision-making process. Differing versions of the legislation are now before the European Parliament and the Council of the European Union (which represents the EU’s 27 member states) while technical experts negotiate a common text, which, once approved, will become law (the process is called “trilogue,” because the European Commission is part of the negotiation, too). Both versions seek to set new, tougher requirements for online platforms, including granting consumers the right to appeal algorithmic rankings and requiring some companies to disclose potentially sensitive data to non-profit organisations and journalists. And both versions seek to expand European Union powers in a crucial area: tougher reporting requirements and rules for goods being sold online.

But is the problem of illegal and counterfeit goods really as large as the proposed solution implies? Officials in France – the home of luxury goods makers Hermès International s.a., Kering, L’Oréal s.a. and LVMH Moët Hennessy Louis Vuitton – seem to think so. On 15 October 2021, almost at the same time as the Rogan podcast fiasco, the French ministry of economy, finance and recovery published Conformité des produits vendus en marketplaces [compliance of products sold on marketplaces], a timely report obviously intended to feed the ongoing DSA debate in Brussels. It stated, boldly, that 60% of the products it had sampled were illegal goods that did not comply with basic health or other product standards – and that 32% were downright dangerous. But context and close analysis of the underlying data showed a different story, one where the problem was perhaps not as systematic as a skewed look at the data would imply. The survey is based on merely 129 products across 10 marketplaces, and no information is provided on how the sample was selected. Amazon.com Inc., for one, offers 75 million products for sale every day. Is its online market really to be judged on a highly selective survey of 129 products?
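
To see how little weight a sample of 129 can bear, consider the margin of error it implies even under the charitable assumption – not documented in the report – that the products were randomly selected. A rough sketch:

    # Approximate 95% margin of error for a proportion estimated from a
    # sample of n = 129, assuming (charitably) simple random sampling.
    import math

    n = 129
    p = 0.60                                  # reported non-compliance rate
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"60% +/- {margin:.1%}")            # roughly +/- 8.5 points

And if the sample was not randomly drawn, even that wide interval says nothing about the tens of millions of listings it is being used to characterise.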

This is not a unique case. The use of “out-of-context” evidence has been a recurrent issue in the debate over product safety, as this author pointed out in Fighting Counterfeits or Counterfeiting Policy? A European Dilemma, a previous post on the Evidence Hub, a Lisbon Council project created to discuss and disseminate the evidence being used in policymaking. For starters, there should be a thick blanket of caution over the headline €121 billion counterfeit-market estimate that the European Commission cites in the impact assessment of the digital services act. The €121 billion figure comes from Trends in Trade in Counterfeit and Pirated Goods, a European Union Intellectual Property Office (EUIPO) and Organisation for Economic Co-operation and Development (OECD) report, which took a much more careful approach to the data. It noted that most counterfeit goods seized at the border are watches, clothing and bags – brand-sensitive items where the original is worth much more than the cheaper knock-off. But the market figure presented is calculated on the value of the original good, not the estimated cost of the much cheaper counterfeit. This, as the EUIPO puts it bluntly, “may lead to an inflated estimated value” in the overall figure. But nuance like this often gets lost in the policy debate. It can lead not only to wrong conclusions but to wrong actions as well.

And there are other examples. One recurrent policy argument is that counterfeit goods are a major danger to consumers. To prove this, in a summary of the data compiled in the Counterfeit and Piracy Watch List – a European Commission-led project in which stakeholders are invited to report marketplaces where they suspect counterfeit goods are being sold – the European Commission reported that “97% of reported dangerous counterfeit goods were assessed as posing a serious risk to consumers.” But the data came from a different source – the Qualitative Study on Risks Posed by Counterfeits to Consumers, an EUIPO report. There, the EUIPO stated bluntly that the dataset used for the survey was too small to allow for statistically significant analysis, and even labelled the study “qualitative” as a nod to the limited conclusions that should be drawn from a sample so small. But even there, the results told a different story than a superficial glance might reveal. Of the 15,459 dangerous products then identified in the EU Rapid Alert System – an online portal where countries can share information with other countries about dangerous goods they have identified – only 191 were counterfeit. In other words, only around 1% of the total number of dangerous goods turned out to be fake – not the 97% you might have thought if you’d only ever seen the original statement presented out of context.
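
The two percentages are not in conflict; they answer different questions over very different denominators. A minimal sketch of the arithmetic:

    # Figures from the EU Rapid Alert System, 2010-2017, as cited above.
    dangerous_products = 15_459   # dangerous products identified in total
    confirmed_counterfeit = 191   # of which confirmed counterfeit

    # The share of dangerous goods that were counterfeit -- the figure
    # relevant to the policy question:
    print(f"{confirmed_counterfeit / dangerous_products:.1%}")   # ~1.2%

    # The "97%" answers a different question: of the small subset of
    # reported dangerous *counterfeits*, how many were judged to pose a
    # serious risk? A high share of a tiny base -- not a statement about
    # dangerous goods in general.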

Most of these questions would be academic if policy were not about to be made on the basis of these statistics – and their out-of-context citation. But it is: rules are being drawn up – sometimes hastily, with only limited impact assessment – that would require online platforms to take steps above and beyond the commitments already made in the Product Safety Pledge and the Memorandum of Understanding on the Sale of Counterfeit Goods on the Internet. Among the provisions in the DSA are expanded rules requiring intermediaries to verify the accuracy of trader information and listed goods before agreeing to sell them (providing a “best effort” towards that end) and, in the draft favoured by the European Parliament, to carry out random checks on goods offered for sale to confirm that they are not illegal. As Victoria de Posson, senior manager for public policy at the Computer and Communications Industry Association (CCIA), notes: “While the public debate has centered around a few well-known companies, the truth is that lawmakers will be imposing new obligations on tens of thousands of companies in Europe, most of which are small businesses.”

Behind these efforts is an unspoken implication that e-commerce is somehow contributing to a rise in the sale of counterfeit and illegal goods. But despite the continuous rise of e-commerce over the years, the amount of counterfeit goods in EU trade is falling, according to Global Trade in Fakes: A Worrying Threat, an OECD/EUIPO report. This remarkable trend coincides with several important developments: border agents and shipping companies are working better together to crack down on the brand knock-off trade and to better police the shipping-container industry, which accounts for 81% of the counterfeit-goods trade; and the advent of self-reporting procedures and marketplace-based crackdowns has brought needed transparency and accountability to a process which had few viable controls until recently.

But the debate over what to do now seems to be taking place in an alternate universe – a place where the attitude one strikes towards platforms matters more than what platforms are actually doing. The tone has become more Sergio Leone western than policy discussion, as one recent tweet suggested.

Regardless of where the DSA lands – and it seems unlikely at this point that the governing bodies of Europe will re-examine the statistical basis on which they have drawn some rather wide-ranging conclusions – we should all reflect on how the European debate on technology can be brought more firmly into the realm of evidence-based policymaking, and why this move – so obvious on the surface – is so difficult to realise in practice. For starters, this discussion should serve as a gentle reminder of a frequently overlooked point: the problem of misinformation is much more complex than we perceive. The same logical fallacies we consider misinformation of the worst kind can still find a place in official policymaking – without even raising an eyebrow.

And there are better ways to handle evidence than the intense, deeply politicised cherry-picking that has gone into the illegal-content proposals. “Handling complex scientific issues in government is never easy – especially during a crisis when uncertainty is high, stakes are huge and information is changing fast,” writes Geoff Mulgan, professor of collective intelligence, public policy and social innovation, in “COVID’s Lessons for Governments? Don’t Cherry-Pick Advice, Synthesize It,” a recent article in Nature, the science journal. “There’s a striking imbalance between the scientific advice available and the capacity to make sense of it,” he adds, noting “the worst governments rely on intuition.”

Researchers have a way of avoiding problems like the ones detailed in this post – it is called “evidence synthesis,” which is “the process of bringing together information from a range of sources and disciplines to inform debates and decisions on specific issues,” according to one definition. It is also a way of making sure that the facts we use for policymaking reflect real market conditions and not the hand-picked realities that special interests would like us to see. It is in everyone’s interest to make sure the evidence is balanced and the standards are robust, sustained and proportionate. It makes for better policymaking – and for better lives as well.

David Osimo is director of research at the Lisbon Council.

Fighting Counterfeits or Counterfeiting Policy? A European Dilemma

The regulation on a single market for digital services – also known as the digital services act – proposed by the European Commission contains very little that is terribly new. To the surprise of many, the long-in-the-making policy update keeps many of the pillars that made the 2000 directive on e-commerce such a success. It keeps the European Union’s ban on “general monitoring” requirements, which could have brought free exchange on the Internet to a halt; it limits the potential liability of firms for aggressively taking down content that violates “community standards” with a good-samaritan clause; and it even re-affirms the “country-of-origin principle” around which so much of Europe’s post-war economic success is built.

But there is one area where the proposal reaches dramatically for new ground – and that is in extending the regulation and possible legal liability for the sale of counterfeit and pirated goods online. Concretely, the proposal – which must still make its way through the labyrinthine European legal process before it becomes law – says that “in order to achieve the objective of ensuring a safe, predictable and trusted online environment… the concept of ‘illegal content’ should be defined broadly.” This is anodyne text, to be sure. But those few words could well mask a sea-change in the way platforms are regulated – and the way the fight against counterfeit goods is conducted.

For years, the sale of counterfeit goods online has been fought largely through a voluntary programme of self-regulation. In 2011, the European Commission sat down with platforms and consumer-goods makers and hashed out a Memorandum of Understanding on the Sale of Counterfeit Goods on the Internet. It set up a “notice and takedown” procedure for informing platforms when a manufacturer or government agency saw counterfeit goods being offered online as well as a series of “key performance indicators” to track progress on the speed and thoroughness of removals (the MoU was updated in 2016). In a recent evaluation, the European Commission concluded that the MoU is “a useful tool” for bringing stakeholders together, though it found that the sale of counterfeited and pirated goods remains “a serious problem.”

But the digital services act, if approved, would turn these voluntary commitments into legal obligations – and add the potential for fines that could rise to 6% of global revenue if platforms were found to be slow to respond or remove. Is the effort really necessary? A substantial body of evidence – including data from the European Commission’s own impact assessment of the proposal – indicates that the new terms are at the very least disproportionate to the size of the problem and may well have been concocted with aims that are more political than economic. This could have serious collateral damage: if allowed to stand, it could expose the European Commission as an organisation that responds mostly to pressure from domestic producers while hiding behind the language of protecting consumers. And it sets a low bar for evidence-based policymaking in general – one that could damage the European Commission’s ambition of becoming the world’s leading technology-sector regulator.

The plans start to go awry at the level of “problem definition” – a key pillar in any “smart regulation” and a key principle in the European Commission’s own guidelines on “better regulation.” In the impact assessment published alongside the digital services act proposal, the European Commission states that “it is estimated that total imports of counterfeit goods in Europe amounted to €121 billion in 2016,” adding that “80% of products detected by customs authorities involved small parcels, assumed to have been bought online internationally through online market places or sellers’ direct websites.” These figures, in turn, come directly from Trends in Trade in Counterfeit and Pirated Goods, the Organisation for Economic Co-operation and Development/European Union Intellectual Property Office study – and they contain, by the OECD’s own assessment, several major qualifications and likely exaggerations.

For starters, the calculation assumes that counterfeits completely displace sales of legal products, i.e., that every buyer of a counterfeit product would buy the original one if the counterfeit were not available. This is like saying that everyone who bought a fake Louis Vuitton bag would buy the original if the pirated one were not on offer. In its study, the OECD-EUIPO admits that “this may lead to an inflated estimated value of the detentions in respect to alternative choices, in particular in those subcategories of luxury products where the retail value of the genuine product is much higher than that of the fake product in the secondary markets or that of its cost (e.g. luxury watches).” This is actually an understatement, as the majority of counterfeit goods are found in one sector – fashion products (footwear, clothing and leather articles) – where counterfeit goods are much cheaper than the original. And, as often happens, this fundamental caveat is not even mentioned in the impact assessment of the digital services act.
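
A hypothetical example makes the inflation mechanism concrete (the prices below are invented for illustration, not taken from the OECD-EUIPO study):

    # Valuing seized fakes at the genuine article's retail price vs. the
    # price the fakes actually fetch. All numbers are hypothetical.
    seized_bags = 1_000
    retail_price_genuine = 1_500   # euros, the original luxury bag
    street_price_fake = 50         # euros, the knock-off

    print(seized_bags * retail_price_genuine)  # -> 1500000: the method used
    print(seized_bags * street_price_fake)     # -> 50000: money actually spent

A thirty-fold gap like this, multiplied across millions of seized fashion items, is how a headline estimate can drift far from the money actually changing hands.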

Let’s be clear about this: it might be that the fashion sector does need help fighting counterfeits. But the help would be to the benefit of European manufacturers, not European consumers. Indeed, when it comes to actual consumer harm, it is hard to pin down the effects concretely. In Trends in Trade in Counterfeit and Pirated Goods, the OECD finds that “58.5% of counterfeit and pirated products traded worldwide in 2016 were sold to consumers who actually knew they were buying fake products.” When it comes to safety, the potential harm is even harder to assess. On Safety Gate: the Rapid Alert System for Dangerous Non-Food Products, the European Commission’s portal for reporting and removing unsafe items for sale across borders, regulators confirmed only 191 genuine counterfeits out of the 15,459 products reported over the period between 2010 and 2017.

So, to the extent that it exists, the problem is clearly a fraction of the stated estimate in terms of size. But to what extent is e-commerce to blame? According to the European Commission’s impact assessment, “80% of products detected by customs authorities involved small parcels” (small parcels being typically considered a proxy for e-commerce). But this claim is also based on a misinterpretation. In fact, according to the European Commission’s Report on the EU Customs Enforcement of Intellectual Property Rights, small parcels (post and express courier) accounted for less than 5% of the counterfeit articles seized in 2019, despite being responsible for 80% of seizures. It is sea delivery that clearly dominates the trade in counterfeit articles, with 71.1% of all counterfeit articles found there, while making up only 1.6% of seizures. The reason for this difference should be obvious: a small parcel contains far fewer articles than, say, a large sea container, which continues to be the workhorse of global goods trade. So while the number of small packages seized is large, the actual number of counterfeit articles found that way is rather small. The fact is, most overseas traffic is conducted through large containers. Any regulator looking to put an end to large-scale counterfeiting would be well advised to attack the fat end of the wedge here, focusing on the area where we might expect market-distorting volumes of counterfeits to be found. That means closer control of large-scale shipments, where larger volumes of potentially counterfeit goods are undoubtedly moving.
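
A stylised example – the numbers are invented, not drawn from the customs report – shows how the two statistics can coexist:

    # A seizure is one event, however large the consignment. Invented
    # numbers: many tiny parcel seizures vs. a couple of huge containers.
    seizures = [
        ("small parcels", 80, 10),      # 80 seizures of ~10 articles each
        ("sea containers", 2, 40_000),  #  2 seizures of ~40,000 articles
    ]

    total_articles = sum(count * size for _, count, size in seizures)
    for mode, count, size in seizures:
        print(f"{mode}: {count}/82 seizures, "
              f"{count * size / total_articles:.0%} of articles")
    # small parcels: 80/82 seizures, 1% of articles
    # sea containers: 2/82 seizures, 99% of articles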

E-commerce, as a whole, plays a far smaller role than stated. But since the digital single market measures are carefully designed to target the so-called “very large online platforms,” one question particularly needs to be asked: to what extent are platforms – as opposed to stand-alone e-commerce websites – to blame for the counterfeit trade that does exist? Unfortunately, there is no evidence to answer this question directly, but there are proxies: Eurostat data show that in 2019 only 1% of total sales took place on platforms, against 6% on stand-alone websites or apps.

Not only do most transactions happen on stand-alone websites or apps rather than platforms; unlike stand-alone websites, the main e-commerce platforms all have monitoring mechanisms in place. And these appear to be rather effective. Based on the MoU evaluation, 98% of offers notified as potentially counterfeit in 2019 were proactively delisted by platforms, a 12% increase on the 2016 figure. And according to an assessment of the Product Safety Pledge, a European Commission-led “voluntary commitment of online marketplaces with respect to the safety of non-food consumer products sold online by third party sellers,” the share of product listings taken down within two working days was 99.7% for governmental notices and 97% for listings identified through the monitoring of public recall websites.

In conclusion, the available evidence raises serious doubts about the problem assessment of counterfeit goods as conceived in the digital services act. The problem is a fraction of the size mentioned, in terms of both economic impact and consumer harm; and it is much less attributable to e-commerce platforms than the proposed regulation would have you believe.

But this is much more than a technical issue about the misinterpretation and incorrect reporting of data. The European Commission prides itself on its regulatory performance – and rightly so. The OECD consistently ranks the European Union as the global leader in good, evidence-based, open governance in its Regulatory Policy Outlook. Never more than today – when populism is on the rise and truth risks becoming an accessory rather than the foundation of our democracies – do we need to celebrate and protect these principles and practices.

One such principle is proportionality, enshrined in article 5 of the Treaty on European Union, which requires “that the content and form of Union action must not exceed what is necessary to achieve the objectives.” This is why so much effort goes into “defining the problem” in qualitative and quantitative terms in EU policymaking. If the policy has to be proportionate to the objective, then the correct quantification of the problem is a fundamental prerequisite for designing correct solutions.

The reality is that the problem of counterfeit products is simply of a different order of magnitude from illegal content. The latter is a serious matter that touches upon, and jeopardises, the very basis of our democratic societies. There are systemic trade-offs to be addressed – between protecting free speech and protecting democratic institutions, between encouraging legitimate business models and curbing the spread of misinformation. And in those cases, it is clear that platforms play a central role.

The European Commission is to be commended for taking on the historic challenge of setting the global standard in regulating online content. But it jeopardises its credibility by placing this effort in the same bucket as fighting fake Louis Vuitton bags. Mixing such different issues, with different risks and incentives, weakens the case for regulation, raises suspicions of “ad hominem” persecution of US-based platforms and, in general, makes it harder to find effective solutions to the very real problems we face.

Cristina Moise is senior researcher and head of statistical analysis at the Lisbon Council.

David Osimo is director of research.

Donald Trump, Sedition and Social Media: Will the Ban Stop the Rot?

The debate on Donald Trump’s belated suspension from social media platforms – he has long been in violation of the “community standards” that ban the spread of illegal content and would have been ousted much sooner had he been an ordinary user – has ranged far from the real issues at stake. For starters, the first amendment of the United States Constitution guarantees free speech. But those guarantees do not apply to crimes that might be contained within the speech itself. A good example is incitement. It is perfectly legal, and even moral, to shout “fire” in a crowded theatre if there really is a blaze and people’s wellbeing is in danger. But if there is no fire and the intention is merely to launch a stampede in which people might be harmed, that is most definitely a crime – a prosecutable one, with no free-speech defence.

Trump’s words have long since crossed the line where actual crimes were being committed – and spread via online platforms – including deliberate lies about the sanctity of America’s electoral system, the possible involvement of a political rival’s family in a presidential assassination, allegations about “conspiracies” and “witch hunts” implicating the former U.S. president and vice-president in treasonous crimes, and even the bizarre claim – directly contradicting the findings of America’s intelligence agencies – that efforts to hack U.S. elections could be the work of “somebody sitting on their bed that weighs 400 pounds [181 kilos]” and not the work of Vladimir Putin. And these aren’t just lies; actual harm has resulted in many cases. The explosion of violence in Washington D.C. on 6 January 2021 was merely an impossible-to-ignore example of the danger that malicious lies and implicit calls to violence can produce.

One would think that Angela Merkel, of all people, would understand the toll that deliberately misleading political speech can take on politics itself. Germany has a long and tragic experience of the effect that lies can have on democracy; and it has responded with some of the world’s toughest laws against them. To this day, holocaust denial is a crime in Germany – not just because denying the holocaust is a lie, which it is, but because the lie itself can lead to horrific consequences in a political context if it is allowed to fester. One is left wondering what muse Merkel is listening to these days. Is her opposition to a private-sector-led ban on Trump’s incendiary messaging just the righteous first reaction of a well-educated scientist who grew up in Communist East Germany? Or does German resentment of the rising power of American platforms run so deep that she misses the import of the moment, straying wildly from a seminal German position on the issue – one that has guided the remarkable return of Germany to the family of nations over the last 70 years?

Less attention has been given to the actual effectiveness of banning users, and here there is a mountain of evidence worth revisiting, particularly from the long fight against terrorism and terrorist incitement online. Some argue that an account banned on one platform will only migrate to other accounts and platforms – and, indeed, many rightwing conspirators merely drifted to other far-right platforms after the Trump ban. Those apps saw dramatic growth in the week following Trump’s ban from social media and the U.S. insurrection. Several gatekeepers – including Apple’s App Store, Google Play and Amazon Web Services – subsequently banned the more egregious among them, including Parler, for also fomenting incitement and spreading the same lies that got President Trump banned in the first place.

However, years of experience fighting terrorism online show that bans can be very effective. A recent study from the Programme on Extremism at George Washington University demonstrates that – in the case of the Islamic State of Iraq and the Levant (ISIS) – the suspension of one or two high-profile English-language accounts had a substantial effect on efforts to spread illegal content elsewhere, including on the parallel effort to rebuild, in other places, the terrorist communities dispersed by the ban.

Europol has had an instructive experience as well. It was able to radically slow the spread of ISIS content on Telegram, a messaging service popular with jihadists, by working directly with the platform in 2019. Both Telegram and Europol found that the effort needed to be ongoing; an earlier, one-off effort in 2018 had proven less effective. Since then, there has been substantial follow-up and monitoring, which has had a powerful effect on curbing the spread of material that incites violence – as well as on the “platform migration” that others might have predicted – according to a study by the UK Centre for Research and Evidence on Security Threats (CREST).

What these studies consistently show is that, despite the openness of the Internet and the endless possibility of creating new platforms and opening new accounts, banning users who spread lies and illegal content – and the apps on which they spread them – can significantly slow terrorist outreach and recruitment, especially when public and private actors collaborate.

But platforms were still hesitant to ban Donald Trump. They argue – and apparently believe – that they perform an important public function in relaying the words of a democratically elected president of the United States even when the information those words contain is patently false and they don’t themselves share the views or intent behind the message. There is some merit in that point of view – but not without qualification. The fact is, the presidency of Donald Trump left us with the question, what do you do when the elected leader of a democratic country turns rogue? Do we allow the president to violate the law repeatedly by dutifully spreading his lies and ignoring the mounting violence? Or do we better serve justice by holding elected officials to the same legal standards and norms of public responsibility as the rest of us? It’s not like depriving President Trump of a social-media megaphone deprives him of a voice – he still has the podium of the White House and the ever-present eyes of live television and the White House press corps to carry his words to the world and parse his every utterance. But does he really need a special channel to spread disinformation that others are not allowed to spread? The debate over how social media platforms and the media itself should deal with a rogue leader in a democratic state has only begun.

Paul Hofheinz is president of the Lisbon Council.

David Osimo is director of research at the Lisbon Council.

Country of Origin: New Rules, New Requirements

The European Commission has sent a strong signal: like the electronic commerce directive (2000) before it, the new digital services act will “preserve” the “country of origin principle” – a core tool in the European Union’s legislative chest. That principle holds that if a product or service is legal and licensed in one European Union member state, that product or service should be good enough for sale without additional licensing or restrictions in another.

In the real world, this has important effects. It means that a business that is licensed in one EU member state is free to sell goods and services in any other without additional restrictions. It is a core principle behind Europe’s massive €17 trillion [$20 trillion] single market – the standard which keeps false borders from arising where real ones once stood. And that sizable market – despite a host of evident problems – is the grease which keeps entrepreneurs in relatively small countries rolling forward even in difficult times. A large proportion of European Union small- and medium-sized-enterprise trade is accounted for by crossborder imports and exports.

But the country of origin principle is also something of a mirage – a stopgap measure created to allow speedier progress in areas where deeper levels of integration might otherwise be elusive. The fact is, most law in the EU remains national. It is written by national parliaments, implemented by national governments and enforced by national courts. The country of origin principle sits in the middle of this; it amounts to a vote of confidence among EU member states – a declaration that they believe their EU allies’ laws are as robust and effective as their own, and will therefore honour those laws as their own. But this is not the same thing as having a single set of European rules governing trade or a fully harmonised legal system among 27 independently constituted states. And that hidden fragmentation can cause very visible problems. Companies providing services across borders often find new restrictions awaiting them in target markets, whatever the “principle” might hold.

This is especially true for digital enterprises, whose businesses are theoretically “borderless” but which in reality often find themselves dealing with a patchwork of diverging national rules as they seek to do business in other EU member states or expand across the EU. And it has become even more true as individual EU member states – notably France and Germany – have promulgated robust national laws governing the practice of e-commerce on their territory, including rules for content management and taxation.

Whatever their intention, these national rules often hamper digital service providers’ ability to use the single market for what it was intended to be: a broadly conceived piste for the successful commercialisation of great ideas; a source of inspiration for European entrepreneurs whose companies might someday grow into the world’s largest and most successful; a place where European consumers can find the best products and services at decent, competitive prices; and the powerful engine behind a European economy that seeks to project fundamental rights and responsible regulation into global markets on the back of the economic success Europe enjoys there.

There are, in fact, some important exceptions to the country of origin principle – labour law, for one. Companies established in one EU member state cannot send workers to other EU member states (except for short postings) without agreeing to obey the legal acquis of the member state where the workers are posted, including the payment of social security tax and sectorally established wages. Consumer law is another exception. Recent laws have granted consumers the right to redress in the “country of sale” where they purchase a good. No one is disputing the principle involved here – consumers have a right to redress, and if the laws created to allow that are too distant for them to use, then what kind of redress is there? But the practice itself has proven problematic, offering essentially two sets of rules: one where country of origin rules apply, and another where local point-of-sale logic prevails. Who’s right in these cases? In a digital economy where crossborder service provision is the rule and not the exception, whose rules apply?

The European Commission has sought to defuse these tensions by setting up elaborate “reconciliation measures” intended to help countries speak to each other about emerging issues before companies in one country encounter problems or break laws in another. According to the original electronic commerce directive, EU member states can take blocking measures against products arriving from other member states – but only after 1) the country of destination first informs regulators in the country of origin that the goods are problematic, and 2) the country-of-origin member state fails to act on the complaint. The European Commission must also be notified and has the right to examine the rules in question and file “infringement” charges in cases where single market principles have been violated.

But even this elaborate machinery has failed to eliminate ongoing market fragmentation in an age when borders themselves have long since become more porous. Italy and Spain, for example, have promulgated laws which would require online platforms to report their local sales revenue even if the company is registered elsewhere. Latvia has objected to Russian-language programming on the grounds that hate-speech laws in Sweden and the United Kingdom, where the Russian programming is registered, do not meet the local Latvian standard. Germany, too, has weighed in – adopting the famous Netzwerkdurchsetzungsgesetz (NetzDG, the 2017 network enforcement act), which puts new transparency requirements on all platforms operating in Germany and mandates a 24-hour take down for illegal content. France followed suit with its Loi contre les contenus haineux sur Internet (the law against hateful content on the Internet, known as the “Avia law” after its sponsor Laetitia Avia), though the law was recently struck down in large part by France’s constitutional council.

None of these objections is unreasonable. But they do pose difficulties for companies doing business in the EU. And they have caused unwelcome collateral damage for European companies. Conceived in many cases as a way of reining in the market power of large, international platforms, the high compliance costs and tough-to-overcome obstacles generated by diverging laws have fallen squarely on smaller companies and startups. Big companies have big legal offices which can successfully manoeuvre their way through diverging rules that hamper efforts to buy and sell across borders. Smaller companies do not. This is one reason why Europe’s much-vaunted single market remains more vision than reality. And it is a reason why large, global platforms have had an easier time competing across Europe than the homegrown champions. The national laws are well intentioned. But their effect – in the absence of careful coordination at the European level – is to split the market, leaving smaller, local companies at a distinct disadvantage.

So what then should the European Commission do? Two things:

  1. The proposed revision to the e-commerce directive offers a great opportunity: first and foremost, it is a chance to recommit Europe to the country of origin principle, which has powered the growth of the single market since at least 1993. But it is also a chance to clarify dramatically how the country of origin principle should work in practice. The new rules could, for one, include better definitions of key concepts like “place of establishment” and “centre of activity” as well as clearer procedures for governance and compliance. This might include, inter alia, clarity on whether media companies based in one member state should be required to comply with unique national laws in other member states where their offerings are shown (such as the cumbersome requirement for financing local-language content in France). And the question of tax reporting – while well outside the scope of the e-commerce directive – is another bone of contention. As is the growing plethora of diverging and sometimes conflicting national regulations on hate speech and mandatory take-down times.
  2. And there’s a second thing: the country of origin principle is only one tool in the European kit. A more powerful tool is full harmonisation – a system under which European countries could agree a single set of rules, enforceable by a European-level agency and/or political body. To date, the European Commission has sought to mitigate market fragmentation by pursuing legislation which runs ahead of national initiatives. The recent European Union regulation on preventing the dissemination of terrorist content online (2018) is one effort intended to run ahead of similar initiatives emerging in France and Germany. But initiatives like this could be made more systematic. Even more importantly, they could be given added weight through a permanent institutional presence – a European regulator, in other words – empowered to ensure that European law is consistent with the continent’s single-market aspirations and that one set of common rules is being applied unswervingly across the continent.

And here’s where the story gets really interesting. Contrary to received wisdom, the technology sector is not avoiding regulatory scrutiny. To the contrary, some companies have stepped up and asked for more oversight. In a key paper drafted over a two-year period with extensive cross-industry input, EDiMA, the Brussels-based association representing 15 of the world’s most successful technology companies, spoke warmly of “an EU-level body” that “at the very least should function as an EU-level coordination mechanism for designated national authorities capable of delivering legal certainty and consistency for all parties.” EDiMA added: “Crucially, the focus of an oversight body’s work should be restricted to the broad measures which service providers are taking. It should not have the power to assess the legality of individual pieces of content and it should not be empowered to issue takedown notices, which is the remit of the courts.”

Across the Atlantic, tech CEOs have been racing to Washington DC – and calling for more regulation. As in Europe, the goal is not necessarily to bring more rules to the sector but to create a single rule book that would be applicable across all 50 U.S. states. The worry is that state-level legislation – such as California’s strict new privacy laws – could cause the vast American market to splinter back into 50 mini-states.

Success in the digital field is not beyond us. Ultimately, it’s about providing great products and services, using our immense scientific knowhow to put better goods and services in people’s hands at affordable prices. But getting there will require imagination and boldness a bit greater than what is on display right now. It’s not enough to regulate technology markets as if the only concern were to punish the winners. We must also learn to give birth to the kinds of companies that conquer global markets – and we must give those companies the market access they need to be successful. First and foremost that means creating a single market as readily accessible to small companies with big ideas as it is to large, global ones. A hefty legal affairs department should not be a prerequisite for success in Europe. A stronger single market – with less friction, greater clarity, broader scope and more consistency – would be the most powerful catalyst European regulators could deliver for success.

Paul Hofheinz is president and co-founder of the Lisbon Council.

Amarins Laanstra-Corn is a research associate at the Lisbon Council.

David Osimo is director of research.

Disinformation and COVID-19: Two Steps Forward, One Step Back

Few issues have brought the challenge of Internet regulation to the fore more than the recent COVID-19 crisis.

The global pandemic and ensuing lockdown offered a near-perfect petri dish for testing the commitment of platforms to spreading correct, accurate health information under the most trying, jarring circumstances – and for making sure that evildoers weren’t able to deflect blame from themselves and sow mistrust in public institutions at a crucial, delicate moment.

False information that could harm the health of users is banned on most platforms; community guidelines clearly mandate that harmful and misleading user-generated health information is content whose spread should be blocked and whose lies should be removed. So there is no question that the platforms have a mandate here. The issue is: how seriously do they take it? And how effective are the policies they enact at stopping the spread of harmful disinformation?

The answer seems to be two-fold. As the COVID-19 virus spread, the platforms showed themselves ready to take unprecedented steps to attack the problem, including (in the case of Facebook) putting correct, public-health-authority-approved advisories and information at the top of every news feed across its two-billion-person network. But the unprecedented effort also revealed how serious the problem is – and how easy it is to take advantage of the usual hands-off approach. Put simply, harmful disinformation continued to spread – though in much lower volumes than it would have had it not been contained as a matter of official policy.

Early in the crisis, platforms found they were fighting a complicated war on multiple fronts. Not the least of their problems were state-funded disinformation campaigns and the state actors behind them: the Chinese government, for one, was working overtime to spread lies and re-position itself as a helpful foreign ally despite having contributed mightily to the virus’ initial outbreak and spread. Meanwhile, in an ostensible effort to reassure an increasingly nervous public, U.S. President Donald Trump suggested people might be able to cure the virus by ingesting bleach.

Harder to stop were private citizens, some of whom make careers out of promoting conspiracy theories and drawing attention that way. Their motivation is hard to ascertain. It could be profit; it could be something else. But either way their actions are deeply harmful. Even more nefarious is the perilous combination of the two: conspiracy theorists whose voices are amplified by state-run media campaigns and foreign-operated bots. Put simply, the disinformation campaigns on COVID-19 evolved like a virus, mutating in response to efforts to extinguish them. Platforms have, for example, taken down many fake accounts in recent years; but governments with active disinformation programmes have learned they can avoid fake-news filters by waiting for domestic purveyors of disinformation to post content – then using their bots to spread those lies.

One way of dealing with the problem was to make sure people got good, accurate information not from conspiracy theorists or twisted government propaganda but from reliable public health authorities. And that’s what Facebook did; official health information on stopping the spread of coronavirus now appears at the top of every Facebook news feed, worldwide. Twitter, too, took unprecedented steps, providing alternative sources of information alongside factually incorrect tweets from U.S. President Trump and Chinese Foreign Ministry Spokesperson Lijian Zhao.

How effective has this effort been? A study in April 2020 from the Reuters Institute for the Study of Journalism (working with the Oxford Internet Institute and the Oxford Martin School) found that, while the number of fact checks of English-language online content soared 900% in January-March 2020, 59% of the posts rated as “false” by independent checkers could still be found online at the end of the period. YouTube and Facebook did somewhat better; only 27% and 26%, respectively, of posts rated “false” remained up. The study also found that disinformation from official government sources had an unusually large impact; state-backed media – including government-run “news” outlets in China, Iran, Russia and Turkey – produce relatively little content but generate massive “engagement” with English-speaking audiences around the world – roughly 10 times more than the BBC. The Reuters Institute calculates that official disinformation constitutes only about 20% of the disinformation circulating online, but it gets more than two-thirds of all social-media engagement.

The Reuters Institute summarises the themes and between-the-lines messages emerging from these government-backed campaigns: 1) criticism of democratic governments as corrupt and incompetent, 2) praise for their own country’s global leadership in medical research and aid distribution, and 3) promotion of conspiracy theories about the origins of coronavirus.

Perhaps the best illustration of how – and why – this phenomenon is so hard to regulate on platforms that insist on remaining open is the case of Plandemic, a 26-minute film produced with the input of a discredited anti-vaccination campaigner. The film – which alleges that a shadowy cabal of global elites is profiting from the spread of coronavirus and the coming effort to vaccinate against it – has been removed from all mainstream social-media platforms after being fact-checked as false and misleading. But not before it was viewed eight million times, shared 2.5 million times in a three-day window and spread well beyond the Facebook, YouTube, Twitter and Instagram sites where it was initially uploaded (QAnon, a 25,000-member group that pushes rightwing conspiracy theories, played a crucial role in its early spread). A New York Times investigation shows how the audience started small, but, relying on a re-post from a popular television “doctor” (with 500,000 Facebook followers) and a mixed-martial-arts fighter (with 70,000 Facebook followers), it went viral and soon broke into the mainstream U.S. political debate. The debunked film eventually made its way to the website of Reopen Alabama, a 36,000-member group pushing for the U.S. state to end the lockdown despite health guidelines. In the end, the momentum that digital technology gives conspiracy-driven communities like the ones behind Plandemic could be slowed but proved difficult to stop.

The more important news may not be that truth filters don’t always work or that the system can still be gamed. Much more significant is the new approach emerging over disinformation; as of 12 May, more than 50 million pieces of misleading health-related content had been flagged and given warning labels on Facebook alone; simultaneously, Facebook reported more than 350 million click-throughs to correct and accurate information on the pandemic’s spread and the safety measures expected of the population.

Regulators and platforms showed that – in a public emergency – they can work together to get reliable information out to people. And some platforms also showed that, when it comes to public health, they are not prepared to let political leaders abuse their platform to spread disinformation – even when it means facing the full wrath of those leaders for calling “malarkey” over their lies. These are important precedents, going forward. A new atmosphere has been created, a new spirit of public-private collaboration for more socially responsible outcomes and – perhaps most importantly – a re-dedication to science and evidence within the bellies of the platforms where so much social interaction now takes place. It’s up to us all to demand only the best of our companies and public institutions, to consolidate these gains and to make sure the Internet remains a tool for spreading democracy – and not for destroying it.

PAUL HOFHEINZ
Paul Hofheinz is president and co-founder of the Lisbon Council.

Illegal Content: Safe Harbours, Safe Families

The rules on illegal content are clear: if it’s illegal in the real world then it’s illegal online. Platforms and regulators have seldom sparred over this; the community guidelines enforced by most platforms are straightforward. Content suspected of being illegal can be flagged for inspection – or blocked at upload – and should be removed as quickly as possible if it turns out to be unlawful. This zero-tolerance rule applies to many types of illegal material, but it applies first and foremost to child pornography and images of children being abused.

Many platforms have invested heavily in artificial intelligence to help them spot and block illegal content before it even goes up – so much so that some images, like the iconic picture of a naked Vietnamese child running to escape an American napalm attack, have been incorrectly flagged and temporarily barred (a subsequent human review saw the content restored and the algorithms tweaked).
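
To make the trade-off concrete, here is a minimal sketch – in Python, with every name and threshold invented purely for illustration – of how an upload filter might route content between automatic blocking, human review and publication. Production systems are vastly more sophisticated; the point is only the triage logic, which explains both the speed of automated removal and the occasional false positive that only a human reviewer can catch.

```python
# A minimal triage sketch for an upload filter. Every name and
# threshold here is invented for illustration; production systems
# are far more complex.

BLOCK_THRESHOLD = 0.98   # auto-block only when the model is near-certain
REVIEW_THRESHOLD = 0.80  # the grey zone goes to a human moderator

def score_content(content: bytes) -> float:
    """Stand-in for a trained classifier returning P(violating).
    A deterministic toy score so the sketch runs end to end."""
    return (sum(content) % 1000) / 1000.0

def triage(content: bytes) -> str:
    score = score_content(content)
    if score >= BLOCK_THRESHOLD:
        return "blocked at upload"
    if score >= REVIEW_THRESHOLD:
        # A human reviewer decides; this is where a wrongly flagged
        # historical photograph would be restored.
        return "queued for human review"
    return "published"

if __name__ == "__main__":
    print(triage(b"example upload"))
```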

The questions become trickier when legal liability is brought into the picture. In 1996, the United States set the rules that would become the standard; according to section 230 of the communications decency act, platforms would be expected to act in “good faith” to restrict access to content that was “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable” and would enjoy legal immunity from prosecution over content that users posted (including prosecution for removing user-posted content that fell foul of the platform’s community guidelines). In Europe, article 14 of the electronic commerce directive (2000) did the same; it said platforms were not liable for content posted on their sites if they had no prior knowledge of its illegal nature and acted expeditiously to remove it once notified.

The disturbing thing is that the amount of child-abuse material available online is rising. The volume of content hosted on websites containing sexually abusive material has increased a staggering 70% since 2017, according to an Internet Watch Foundation report.

To be clear, the platforms themselves are not guilty of this rise; much of the material appears on stand-alone websites. Shockingly, 90% of those websites originate in Europe.

No one supports the use of the Internet to aid and abet crimes against children. But the question of whether the rules are tough enough – and whether platforms are doing enough – is in clear dispute. So is the flip side of the argument: platforms use filters to flag and remove content; have these become too sensitive? Is the law inching towards censorship and surveillance? Do lawmakers need stronger tools for tracking criminal activity online? Or is there an emerging threat to privacy slipping in under the banner of stopping crimes we all know and feel to be horrific?

And perhaps more pointedly, is the horrendous fact of the continued existence and spread of online child pornography – and the evident need to respond with strengthened measures – being used as a convenient screen for compelling platforms to allow political parties – some led by powerful politicians – to spread lies without being challenged?

In May 2020, U.S. President Donald Trump announced a formal “review” of the section 230 exemption, charging the platforms with political bias after one platform posted a link to correct information next to a tweet containing proven and provable lies. Earlier, U.S. Senator Lindsey Graham, a South Carolina Republican, introduced the sweeping Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, a bill that would allow the platform liability exemption to be lifted in certain cases. Under the proposed rules, a 19-person committee would elaborate a code of conduct on content removal (which the U.S. Attorney General and the U.S. Congress would ratify and amend); companies that failed to meet the tough standard could see their legal immunity from prosecution lifted, opening the door to lawsuits from aggrieved parties who felt that harm had been done or their rights abused by material circulated on the platforms.

Law enforcement officials – including U.S. Senators sponsoring the bill – say the platforms still don’t do enough to stop illegal content from spreading; with the support of several Democratic Senators, they seem to be carving out a middle ground where platforms could keep much of their legal immunity but where, crucially, guidelines approved by the U.S. Attorney General (currently a controversial Republican) could be used to lift or suspend it in some cases.

Privacy advocates see additional threats; they say the law could be used to force companies to open backdoors on end-to-end encryption, an increasingly popular way of communicating and exchanging information. Or it might possibly lead to pre-emptive curbs on the use of end-to-end encryption itself.

The European Commission has also promised new rules “for a more effective fight against child sexual abuse” later this year, according to the 2020 work programme put forward by President Ursula von der Leyen. And the U.S. law’s final contours aren’t yet known. To be sure, class-action suits are how the U.S. established high product-safety standards in areas as diverse as automobiles, children’s toys, lawnmowers and airplanes. But the risk lies in the power the proposed law would give political figures to lift immunity and allow lawsuits against platforms that challenge their authority on the most basic points of truth and evidence. Recent history has shown that U.S. administrations – and this one in particular – are not always impartial and don’t shy away from using the tools of state for political ends.

Which leaves the horrific problem of child abuse online. Whatever the modalities, regulators should set aside their potentially harmful games and work with industry and privacy advocates to curb this scourge that no one wants and everyone would like to see end. Its rise is a shame and a disgrace that should concern us all. But attitude and scorn are not sufficient tools for fighting it. And political witch hunts will distract and prove even less effective. The best response would be to take the issue seriously, craft joint responses and tackle the problem collectively. That’s what voters want. That’s what society needs.

PAUL HOFHEINZ
Paul Hofheinz is president and co-founder of the Lisbon Council.

Regulation and Consumer Behaviour: Lessons from HADOPI

France’s tough enforcement rules on copyright infringement – dubbed the HADOPI laws, after the High Authority for the Dissemination of Works and the Protection of Rights on the Internet, the agency set up to enforce them – can be considered the poster children of the so-called “graduated response” policy, an effort to fight illegal file sharing with the heavy arm of the law.

Under the rules, consumers found to be sharing content illegally were contacted directly – and eventually given three warnings, first via e-mail, then via registered mail. The ultimate sanction was a fine of up to €2,000 and, most controversially, the temporary suspension of Internet access. To enforce this, HADOPI needed the full collaboration of Internet service providers – which the law mandated – to supply the authority with the user’s personal contact information and to invite the user to install a navigation filter.
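
As a rough illustration – not a reproduction of the actual French procedure – the escalation ladder can be sketched in a few lines of Python; the stage names and the referral step below are simplifications of the legal process just described.

```python
# Simplified sketch of the "graduated response" escalation ladder.
# Stage names are illustrative; the real procedure involved judicial
# review before any fine or suspension of Internet access.

LADDER = ["first warning (e-mail)",
          "second warning (registered mail)",
          "third warning"]

def next_step(prior_warnings: int) -> str:
    """Return the next enforcement step for a repeat infringer."""
    if prior_warnings < len(LADDER):
        return LADDER[prior_warnings]
    # After three warnings a case could be referred to prosecutors,
    # with sanctions up to a EUR 2,000 fine or temporary disconnection.
    return "referral to prosecutor"

if __name__ == "__main__":
    for n in range(4):
        print(n, "->", next_step(n))
```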

Did this 2009 law work? As is often the case with public policy and technology, it is not easy to give a clear answer, but luckily the law has been widely studied by scholars and legal analysts.

First, the law certainly had an impact on public perception. By 2017, more than nine million first warnings had been sent; 846,018 second warnings; and 7,886 third warnings, of which 2,146 cases were sent to prosecutors. Even more than the actual numbers, the wide publicity the law received – it was attacked and debated everywhere – meant it affected even those users who never received a notification.

When it comes to actual results, the evidence is more contradictory. In particular, an article in the Journal of Industrial Economics provides evidence that HADOPI caused an increase of 22.5% in legal music file sales in France compared with a control group of countries that did not implement a similar legislative measure, and that this increase was concentrated in the genre with the highest piracy rate (notably rap). The study has been criticised, but it is a robust, peer-reviewed study published in a prestigious journal and based on real market data. Things don’t get much better than that in policy evaluation.
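
For readers unfamiliar with the method, the study’s core logic is a difference-in-differences comparison: the change in French sales is measured against the change in comparable countries that never adopted the law. A toy version – with invented numbers, purely to show the arithmetic – looks like this:

```python
# Toy difference-in-differences calculation. The figures below are
# invented for illustration; the published study uses real market
# data and a full econometric specification.

france = {"before": 100.0, "after": 112.0}      # treated group (sales index)
controls = {"before": 100.0, "after": 95.0}     # countries without the law

treated_change = france["after"] - france["before"]      # +12.0
control_change = controls["after"] - controls["before"]  # -5.0

did_estimate = treated_change - control_change           # +17.0 points
print(f"Difference-in-differences estimate: {did_estimate:+.1f} index points")
```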

The data suggests that HADOPI was, at least partially, successful in changing the behaviour of consumers. Yet at the same time, if we look at long-term trends in revenues, it is clear that it was not HADOPI that solved the problem of the music industry – its positive impact was too small to change a major declining trend. Legal downloads of music, such as those boosted by the law, were only able to slow the decline. As my Lisbon Council colleague Paul Hofheinz argued in a previous Intermediary Liability Blog post, the return to growth in the music industry came mostly from the emergence of the subscription-based business model – underpinned by innovations such as affordable mobile broadband rates and the rise of smartphones. Interestingly, the same business model is being successfully replicated across different industries, from information technology (software as a service) to e-commerce (Blue Apron meals) to manufacturing (John Deere agriculture technology), leading to the emergence of what some call the “subscription economy.” Online subscription is already hailed as the secret behind the return to profitability for newspapers such as The New York Times. And it is also providing an opportunity for a new, fairer model on personal data, as correct data processing is clearly linked to benefits in terms of quality of service.

On a similar note, research from the University of Amsterdam points out that “the risk of getting caught” is not a primary factor deterring access to illegal content. It is actually the least important reason across all countries, including France, well behind ease of use and findability – not to mention price.

One effect of HADOPI was not to reduce overall piracy rates, but simply to push users from peer-to-peer towards other platforms when accessing copyrighted content.

All this is not an argument against government intervention per se. For one, it could be that policies such as HADOPI helped change customers’ behaviour towards paying for content in a way that ultimately also boosted paid alternatives, such as the flourishing subscription-based business model. It could also be that the failure of such coercive measures to reverse the decline helped convince the music industry to accept the subscription-based model.

In any case, it is a healthy reminder that public policies, even when ambitious and highly restrictive, seldom manage to achieve an impact comparable to the introduction and scaling up of new services that meet customer needs. And that, when facing new threats, forward-looking experimentation is a more effective approach than repression.

DAVID OSIMO
David Osimo is director of research at the Lisbon Council.

Incitement to Terrorism: Are Tougher Measures Needed?

Terrorism is one of those rare domains where most people agree: it is a despicable practice that deserves no tolerance and ought to have no place in this world. Innocent lives are shattered, young minds are poisoned, the rule of law is undermined and democracy is weakened. There is no free-speech opt-out for violent crime or incitement to violence. The violation is not the speech – it’s the crime behind it. And the violence that “speech” begets.

The effort to rid the Internet of terrorist content enjoys near unanimous support. All platforms have “community standards” that ban content that incites terrorism; and most of them act with speed and decisiveness to avoid allowing their platforms to serve as organising, communicating or propagandising hubs for terrorist organisations.

But the battle is difficult. Most platforms have long since put in place filters to remove and contain violent content before it can be uploaded or spread – and banned users and links that are palpably linked to terrorist or terrorist-based activities. But the advent of live broadcasting has made it harder for these methods to work. The horrific attacks in Christchurch, New Zealand – in which 51 people were killed, many of them in a live broadcast – are a case in point. More than 4,000 people watched the broadcast on Facebook before it was “flagged” by viewers after an excruciating 29 minutes online. Once removed, the video – which resembled a live-action video game – continued spreading among sympathisers, who were able to make copies and copies of copies, many of which avoided automatic filters by having slightly altered content. In the end, Facebook says it took down a staggering 1.5 million videos of the attack within 24 hours – the benchmark for speedy removal these days (Facebook says 1.2 million of those videos and images were detected and blocked at upload). But six months later, a report by NBC News found videos and photographs from the shooting still online – including on some Facebook pages.

Regulators responded with a flurry of new coalitions and tougher rules. The European Commission, for one, convened the government-level EU Internet Forum, a high-level public-private body set up to fight terrorist propaganda online – and proposed a new regulation that would penalise platforms that allow terrorist content to remain up for more than one hour (the law is being discussed in the European Union’s complex decision-making process). Elsewhere, industry and government united to launch the Christchurch Call to Action, in which they committed – at G20 level – “to detect and immediately remove terrorist and violent extremist content online.” Other methods include tighter commitments to ban accounts that have posted questionable content “without context,” i.e., that seem to spread a terrorist message rather than comment on it, as well as a “database of hashes” – currently containing more than 200,000 images and data points – which platforms can use to block content across the Internet (terrorist content that has appeared in one place becomes easily identifiable by other platforms, so it can be blocked before it goes up).
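
The hash-sharing mechanism is simple in principle. The sketch below uses a plain SHA-256 digest for brevity; exact-match hashes only catch byte-identical files, which is precisely why slightly altered copies of the Christchurch video slipped past filters (real systems reportedly add perceptual hashing, which survives re-encoding and small edits).

```python
# Sketch of a shared hash database for known terrorist content.
# SHA-256 is used here for simplicity; it only matches byte-identical
# files, which is why slightly altered re-uploads evade exact-match
# filters (real systems reportedly add perceptual hashing for that).

import hashlib

shared_hash_db: set[str] = set()  # populated across participating platforms

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def register_removed_content(content: bytes) -> None:
    """A platform that removes content contributes its hash."""
    shared_hash_db.add(fingerprint(content))

def should_block_upload(content: bytes) -> bool:
    """Any participating platform can screen new uploads."""
    return fingerprint(content) in shared_hash_db

if __name__ == "__main__":
    register_removed_content(b"known violating video bytes")
    print(should_block_upload(b"known violating video bytes"))   # True
    print(should_block_upload(b"known violating video bytes."))  # False: one byte off
```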

National governments responded, too. Germany and France passed tough new laws requiring companies to remove terrorist content – or face heavy fines. The laws have had some effect; at a minimum, they showed governments’ intent to fight this scourge seriously. But, broadly speaking, the tougher rules don’t seem to have pushed platforms much further than their own community guidelines had already mandated.

And they created a bizarre anomaly in Europe: two European Union member states have strong laws, each with its own local quirks; twenty-five member states have only “guidelines.” The proposed European Commission regulation would establish the strict one-hour deadline adopted by the French as the pan-European standard.

But the problem is not the length of time that terrorist content stays up. The fundamental challenge is “virality” – the speed with which terrorist content can spread before it is removed. Much effort has been made to cut down these times, and, indeed, the platforms do seem to be doing better. YouTube, for one, has managed to raise the share of terrorist videos viewed nine or fewer times before removal to 50%, up from 6% in 2017. The share of videos watched more than 100 times before removal has fallen to 25%, down from 70% in 2017.

The bottom line is: banning terrorist incitement from the Internet is one area where we can and should work together. It’s up to regulators to set the tough, uncompromising targets that public safety requires and that citizens demand. But it’s up to platforms to make sure that their compliance comes as close to perfect as possible, employing every tool at their disposal to keep content that incites violence off platforms and back in the gutter, where it belongs. And there is some evidence that this is happening. Since the Christchurch tragedy, most of the large platforms have invested heavily in artificial intelligence to improve their detection systems and make removals more permanent. Governments have pitched in, offering stepped-up “coordination and information sharing” across borders regarding terrorist incidents as part of the European Union Crisis Protocol (information sharing can only happen in strict compliance with the General Data Protection Regulation). The European Union Agency for Law Enforcement Cooperation (Europol) has taken a leading role, too, stepping up its coordination activities to make national and local responses more robust. Several leading universities have contributed advanced-research programmes on better detection and prevention.

The fact is, terrorist content has no place in a modern, democratic society. The Internet can and should be a vehicle for discussion, debate, connection and knowledge exchange – not a place where young minds are radicalised or sick minds made more ill. Tough sticks from regulators carry some punch. But so do tough standards from platforms. We all can do better at this. And we should.

PAUL HOFHEINZ
Paul Hofheinz is president and co-founder of the Lisbon Council.

Creative Works, Copyright and Innovation: What the Evidence Tells Us

The Internet has given birth to an explosion of creativity – or is it more of an implosion, as some argue?

Creative-industry lobbyists are fond of talking up a “value gap,” arguing that revenue generated online through advertising is not being shared out equitably by platforms with creators – or, to be more precise, with the publishing companies that own catalogues of material and represent artists. This, in their view, has a deleterious effect. It has led to a fall in funding of quality cultural content; it is inequitable to the artists whose material is attracting the audiences; and it has provided safe harbour to a tsunami of illegal content and dubious media where high-quality, well-curated cultural works once stood.

But do the facts support this case?

The evidence points to some interesting conclusions. At first, piracy did blow a hole in the creative arts – or at least in the models that existed for monetising cultural offerings up until the spread of digital technology and peer-to-peer file sharing. As recently as 2001, the music-recording industry saw $23.5 billion [€26.1 billion at the then-prevailing exchange rate] in annual revenue – mostly from the sale of CDs. But Napster and peer-to-peer file sharing put an end to that gig. Consumers loved the ease, speed and breadth of choice of file sharing. And maybe even some artists liked it, too (many, like the artist formerly known as Prince, supported the early rise of direct downloading). But could the industry afford to give away work in which it had invested so heavily? And what about the artists themselves? How would they be paid?

For a while, regulators hacked away at the problem – looking to tighten laws here and there and setting up agencies, such as France’s High Authority for the Dissemination of Works and the Protection of Rights on the Internet (HADOPI), to certify that sites were built around content for which the site owner had legally retained copyright. This had some effect, but in the end the heavy-regulation approach proved to be a sideshow. Vastly more successful was the business-model innovation that went on alongside it, in particular the effort to monetise content through new vehicles, such as subscription-service streaming and ad-revenue sharing. This brought a host of benefits – and a few disadvantages. On the positive side, it made consumers very happy. The catalogue available to them was enormous, and the targeting precise thanks to data analytics and purchasing histories. The price was right, too: a month of unlimited content for roughly the cost of one CD or video.

But the economics of the business changed, too. After hitting a modern low of $14 billion [€10.5 billion] in 2014, music-industry earnings have increased year-on-year at a steady pace.

Revenues are still off their 2000 peak. But the trend is in the right direction. And the value of offerings to consumers has risen by 40%, representing a massive expansion of available content and historically unprecedented levels of consumer choice and value.

But have the new economics led to less quality content and poorer artists?

The streaming platforms, for one, have invested heavily in high-end content, and the shows they produce are more than making the quality cut. Fleabag is a case in point. This Phoebe Waller-Bridge-led vehicle went direct from the Edinburgh fringe to Amazon Prime – and walked away with a staggering trove of accolades, including “most outstanding comedy series” honours at the Primetime Emmy, Golden Globe, Screen Actors’ Guild (SAG) and Critics’ Choice awards, and more. Roma, a brooding art-house film by Mexican director Alfonso Cuarón, was nominated for Best Picture at the Academy Awards (the first streaming-service-funded production to be received this way) – and won the coveted Golden Lion at the Venice International Film Festival. Other high-end productions funded by platforms include Netflix’s The Irishman (10 Academy Award nominations, including best picture); Amazon Prime Video’s The Marvelous Mrs Maisel (Primetime Emmy, Screen Actors’ Guild and Golden Globe awards for “best television series”) and Apple TV’s The Morning Show.

But what about the artists? The fact is, the revenue sharing that goes on in these industries often takes place under a cloud of secrecy and opacity. This is not entirely the platforms’ fault; they operate only with the good grace of publishing companies and major studios, all of which have granted rights to their catalogues but usually on the condition that the amount of money re-distributed is treated as a “commercial secret” subject to strict non-disclosure agreements. So we can’t tell you much about the amounts platforms are paying to the studios – or what happens to that revenue after it goes to those studios (i.e., how much of that money reaches artists and which artists receive it). What we can tell you is, if you are a YouTube Partner Programme organisation (with 1,000 subscribers who have watched 4,000 hours of your videos in the last 12 months), Google will pay you around $18.00 [€16.65 at the May 2020 exchange rate] per thousand “ad views,” according to industry sources (the amount varies based on a complex formula). Globally, that works out to roughly 68% of the AdSense revenue generated per ad, according to industry analysis.

The mechanics are similar at Spotify; industry sources report the company pays around 52% of the revenue it makes to record companies (two of which hold equity stakes in Spotify); the money is then handed out to artists, who receive “15% to 50%” of the record company’s cut depending on a complex formula. On its website, Spotify adds: “In many cases, royalty payments happen once a month, but exactly when and how much artists get paid depends on their agreements with their record label or distributor. Once we pay rightsholders according to their streamshare, the labels and distributors (collection societies and publishers, in the case of songwriters) pay artists according to their individual agreements. Spotify has no knowledge of the agreements that artists sign with their labels, so we can’t answer why a rightsholder’s payment comes to a particular amount in a particular month.”
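
Taking the figures quoted above at face value, the arithmetic of the two revenue chains can be sketched as follows – a back-of-the-envelope illustration only, since actual payouts depend on confidential and far more complex formulas.

```python
# Back-of-the-envelope payout arithmetic using the industry figures
# quoted above. Illustrative only: real payouts follow confidential,
# far more complicated formulas.

# YouTube: roughly $18 per 1,000 ad views for a Partner Programme channel.
ad_views = 1_000_000
youtube_payout = ad_views / 1_000 * 18.00          # $18,000

# Spotify: ~52% of revenue reportedly goes to record companies; artists
# then receive 15%-50% of that cut under their individual contracts.
streaming_revenue = 100_000.00
label_share = streaming_revenue * 0.52             # $52,000
artist_low = label_share * 0.15                    # $7,800
artist_high = label_share * 0.50                   # $26,000

print(f"YouTube payout on 1m ad views: ${youtube_payout:,.0f}")
print(f"Artist share of $100k Spotify revenue: ${artist_low:,.0f}-${artist_high:,.0f}")
```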

Other business models raise different questions. TikTok, an app owned by China’s ByteDance, enjoys 800 million users worldwide. Like other platforms, it makes money by placing ads against user-generated content, but that revenue isn’t shared. Instead, TikTok expects “content creators” to be remunerated offline by agreeing to promote and/or wear products in their videos – through product placement, in other words. Most of what you see “influencers” doing on TikTok was paid for – you just don’t know by whom. And the content – if I may say so – is often of limited cultural value.

So what’s the bottom line? The Internet has opened up a vast platform for content distribution on a global scale – and a complex mechanism for monetising it. Contrary to the lines of attack spread liberally around some corridors of power, the Internet has been both useful and profitable for small artists – many of whom are able to find bigger global audiences for niche and native-language offerings that used to remain strictly local. Cultural productions have not suffered either; major productions – often thanks only to the support and vision of the platforms, which compete for audiences now, too – continue to be funded. The quality and range of content on offer has never been so great.

But the business model behind all of this creativity is rather different. The margins are lower – driven by new supply and aggressive competition. But there are margins. Piracy, which defined the early days of the Internet, is on the wane, with many, many consumers turning enthusiastically to relatively cheap, easy-to-access online distribution.

But there is less room for middlemen. Less easy profit for old-school content distributors. And, when it comes to local journalism, the bedrock of democracy, the new terms of reference have been catastrophic. Big-brand journalism like Le Monde and The Guardian has adapted – for them the Internet is a place to reach a global audience and leverage a known brand. But local journalism – whose audience is by definition limited – has not fared so well and may require public intervention if we are to save it.

PAUL HOFHEINZ
Paul Hofheinz is president and co-founder of the Lisbon Council.