In this blog post, our Policy Director Craig Fagan shares his views on the trends we will be tracking in 2017.
Events in 2016 brought the challenges of rising inequality to the fore. In line with Sir Tim Berners-Lee’s vision of a web for everyone, in 2017 we will redouble our efforts towards digital equality. We will continue to work for affordable access for everyone to the full internet; for everyone’s right to speak freely and privately online; and for vital public data to be made openly available to all. In all of our work, we will focus particularly on overcoming the gender divides that plague every dimension of the digital revolution.
However, even while these fundamentals of an open and just web continue to demand attention, new threats and opportunities are emerging. Here are some of the top policy trends we’ll be tracking this year. We welcome your thoughts on whether and how the Web Foundation should engage with these emerging issues.
1. Net Neutrality
The past three years have been very positive for net neutrality – the principle that all internet traffic should be treated equally, with no special paid fast lanes or blocks. After massive popular outcries, net neutrality has been enshrined in law in the US, Brazil and the EU, and India passed strong regulations.
However, all these gains are up for grabs. President-elect Trump’s appointments include many figures who have opposed net neutrality in the past. Across the Atlantic, European countries must now turn good rules into effective enforcement. Meanwhile, in the global South, the tension between net neutrality and zero-rated services that purport to offer an on-ramp to the internet is growing ever greater, and weak regulatory enforcement capacity remains an issue.
The digital rights community will need to be on its toes in 2017 to successfully challenge efforts to roll back net neutrality or green-light services that undermine the idea of a web that works equally for everyone.
2. Artificial Intelligence
Artificial Intelligence, or AI, is often considered a cluster of technologies (such as machine learning) that allow computers to take the data that they are given and learn from it, developing new skills and capabilities. Proper controls, both self-imposed by companies and regulated by governments, are needed to ensure that AI is developed and deployed in an open and accountable way.
The impact of AI is significant. Many social groups are not yet enjoying the benefits of the digital revolution, and the expansion of AI is unlikely to counteract this problem. Estimates suggest that while nearly half of jobs in the US and Japan could be lost to automation, in developing countries as many as two-thirds of all jobs could be displaced. Globally, a recent report by the World Economic Forum suggests that the overall trend of increased automation poses a high risk to international labour markets over the next 10 years, affecting non-manufacturing jobs and displacing labour in the service sector.
A wave of new research, initiatives and collaboration between governments, companies and civil society partners is seeking to understand and find solutions to the social and economic disruption that AI will bring. We will be watching these carefully, with a particular eye to the interests of women, low-paid workers and developing countries.
3. Personal control of personal data
According to IBM, 90% of the data in the world today has been generated in the last two years alone. A large share of this data is created by us as users: what we click on and ‘like’ or ‘favourite’, what we buy, and where we travel to.
However, data is often collected (by governments or companies) without our knowledge or consent. For example, Google Maps’ timeline feature can reveal exactly where we were and when, over long periods of time. For people living under repressive or populist regimes, indiscriminate data collection presents additional risks, including state exploitation of data for surveillance, tracking and repression. Companies and governments have a responsibility to ensure that terms of service and use agreements protect our data.
In 2017, the battle seems poised to heat up: for control over our own personal data, and for clearer, more accountable processes for collecting, processing and storing it.
4. Algorithmic filters and fake news
Search and social media algorithms (computer models) – often driven by advertising dollars – are creating an internet that shows us what it thinks we want to see, rather than what we need to see. It’s impossible to see what gets edited out, and algorithms lack the embedded ethics of news editors, who serve to present diverse points of view. In early December, a BuzzFeed survey found that three in four American adults had believed fake news headlines. For this reason, Google decided to end its “In the news” section, since fake news stories were being displayed there so prominently. Facebook has also introduced new measures to address the problem.
Concern is also rising about the ways in which algorithms may entrench discrimination by race, gender or social class when opaquely applied to areas of our lives such as insurance premiums, university admissions, loan applications, policing or prison sentencing.
Actions already under discussion in Germany and the EU could herald a new wave of regulation.
The question for 2017: will companies, governments and civil society come together to make sure that better standards are in place to make algorithms more accountable and transparent?
5. Online violence
In the European Union, 1 in 10 women report having experienced cyber-harassment since the age of 15, while our research in 10 Global South cities found that young people were the most likely to have suffered harassment online, with over six in 10 women and men aged 18–24 saying they had experienced online abuse. This includes aggressive, often sexualised hate speech, direct threats of violence, harassment and “revenge porn”, including the use of women’s personal and private information or photos for defamation.
Legislation to protect against online violence is lacking in many countries, meaning the web is not a safe space for everyone. Figures suggest that one in five internet users live in countries where punishment for online violence against them is extremely unlikely. This is the year to convince policymakers and social media companies to pay attention to online threats, which they may otherwise dismiss as lacking physical or so-called ‘real-world’ consequences.
6. Protecting encryption and anonymity
In 2015, a UN Special Rapporteur opined that encryption and anonymity online were essential to allow people to exercise their fundamental rights to freedom of expression and association. Encryption is also crucial to the stability, resilience and security of the internet as a whole.
But encryption is increasingly under attack, often with high-profile consequences. In Brazil, for instance, courts have ordered WhatsApp to hand over encrypted messages at least four times, leading to temporary shutdowns of the service when the company refused to comply. In the US, a high-profile legal spat between the FBI and Apple was averted after the FBI managed to hack an iPhone belonging to a terror suspect. The EU has just begun to examine the issue, in a rather opaque way, following pressure from France and Germany to pass an encryption-cracking law.
Encryption is perhaps the last firewall for protecting our digital security and right to privacy. Will it fall in 2017?
Follow us on Twitter at @webfoundation to keep up with policy developments throughout the year. To help us build the web we want, you can make a donation.
January 28, 2017
Thanks for the above. You have described the key issues very well. Any solutions will depend on people with the influence to implement them. I am wondering: who might they be? At the moment, in England, the most powerful agent involved in data transmission and encryption is probably GCHQ. How can GCHQ be persuaded to help? Would such help be compatible with its objectives? In cases where its objectives and ours differ, what can be done?