
Ethical artificial intelligence: Could Switzerland take the lead?

The debate over contact-tracing apps highlights the urgency of tackling unregulated technologies such as artificial intelligence (AI). With a strong democracy and a reputation for first-class research, Switzerland has the potential to be at the forefront of shaping ethical AI.

“Artificial intelligence is either the best or the worst thing ever to happen to humanity,” the prominent scientist Stephen Hawking, who died in 2018, once said.

An expert group set up by the European Commission presented draft ethics guidelines for trustworthy AI at the end of 2018, but there is as yet no agreed global strategy defining common principles, which would include rules on transparency, privacy protection, fairness and justice.

Thanks to its unique features, namely a strong democracy, a position of neutrality and world-class research, Switzerland is well placed to play a leading role in shaping a future for AI that adheres to ethical standards. The Swiss government recognizes the importance of AI in moving the country forward and, with that in mind, has been involved in discussions at the international level.

What is AI?

There is no single accepted definition of artificial intelligence. It is often divided into two categories: Artificial General Intelligence (AGI), which strives to closely replicate human behaviour, and Narrow Artificial Intelligence, which focuses on single tasks such as face recognition, automated translation and content recommendation, for example of videos on YouTube.

On the domestic front, however, the debate has only just begun, albeit in earnest, as Switzerland and other nations confront privacy concerns surrounding new technologies deployed to stop the spread of Covid-19, such as contact-tracing apps, whether or not they use AI.

The European initiative, the Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project, advocated a centralized data approach that raised concerns about transparency and governance. It was derailed when a number of countries, including Switzerland, opted instead for a decentralized, privacy-enhancing system called DP-3T (Decentralized Privacy-Preserving Proximity Tracing). The final straw for PEPP-PT came when Germany also decided to exit.

“Europe has engaged in a vigorous and lively debate over the merits of the centralized and decentralized approach to proximity tracing. This debate has been very beneficial, as it made a broad population aware of the issues and demonstrated the high level of concern with which these apps are being designed and constructed. People will use the contact-tracing app only if they feel that they don’t have to sacrifice their privacy to get out of isolation,” said Jim Larus, Dean of the School of Computer and Communication Sciences (IC) at EPFL in Lausanne and a member of the group that started the DP-3T effort at EPFL.
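To illustrate the distinction at the heart of this debate, the sketch below shows, in simplified Python, the general idea behind a decentralized scheme: phones broadcast short-lived random identifiers over Bluetooth and keep the identifiers they observe on the device, so that matching against the identifiers of users who later test positive happens locally rather than on a central server. This is an illustrative simplification under broad assumptions, not the actual DP-3T protocol or its code.

    import secrets

    def new_ephemeral_id() -> bytes:
        # Short-lived random identifier, not linkable to the phone's owner.
        return secrets.token_bytes(16)

    class Phone:
        def __init__(self):
            self.own_ids = []          # identifiers this phone has broadcast
            self.observed_ids = set()  # identifiers heard from nearby phones, stored only locally

        def broadcast(self) -> bytes:
            eph_id = new_ephemeral_id()
            self.own_ids.append(eph_id)
            return eph_id

        def observe(self, eph_id: bytes) -> None:
            self.observed_ids.add(eph_id)

        def ids_to_publish_if_positive(self) -> list:
            # A user who tests positive uploads only their *own* broadcast identifiers.
            return list(self.own_ids)

        def check_exposure(self, published_ids) -> bool:
            # Matching happens on the device; no server learns who met whom.
            return any(eph_id in self.observed_ids for eph_id in published_ids)

    # Usage: Alice and Bob exchange identifiers; Bob later tests positive.
    alice, bob = Phone(), Phone()
    alice.observe(bob.broadcast())
    bob.observe(alice.broadcast())
    print(alice.check_exposure(bob.ids_to_publish_if_positive()))  # True: exposure detected locally

In a centralized design, by contrast, the observed identifiers are uploaded and matched on a central server, which is what fuelled the transparency and governance concerns described above.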

According to a recent survey, nearly two-thirds of Swiss citizens said they were in favour of contact tracing. The DP-3T app is currently running on a trial basis while the legal conditions for its widespread use, as required by the Swiss parliament, are being defined. However, the debate highlights the urgency of answering questions surrounding the ethics and governance of unregulated technologies.


The “Swiss way”

Artificial intelligence was included for the first time in the Swiss government’s strategy to create the right conditions to accelerate the digital transformation of society.

Last December, a working group delivered its report, “Challenges of Artificial Intelligence”, to the Federal Council (the executive body). The report stated that Switzerland was ready to exploit the potential of AI, but the authors chose not to highlight the ethical issues and social dimension of AI specifically, focusing instead on various AI use cases and the challenges arising from them.

“In Switzerland, the central government does not impose an overarching ethical vision for AI. It would be incompatible with our democratic traditions if the government prescribed this top-down,” Daniel Egloff, Head of Innovation at the State Secretariat for Education, Research and Innovation (SERI), told swissinfo.ch. Egloff added that absolute ethical principles are difficult to establish, since they could change from one technological context to another. “An ethical vision for AI is emerging in consultations among national and international stakeholders, including the public, and the government is taking an active role in this debate,” he added.

Seen in a larger context, the government insists it is heavily involved in international discussions on ethics and human rights. Ambassador Thomas Schneider, Director of International Affairs at the Federal Office of Communications (OFCOM), told swissinfo.ch that Switzerland in this regard “is one of the most active countries in the Council of Europe, in the United Nations and other fora”. He added that it is OFCOM’s and the Foreign Ministry’s ambition to turn Geneva into a global centre of technology governance.

Just another buzzword?

How, then, is it possible to define what is ethical or unethical when it comes to technology? According to Pascal Kaufmann, neuroscientist and founder of the Mindfire Foundation for human-centric AI, the concept of ethics applied to AI is just another buzzword: “There is a lot of confusion about the meaning of AI. What many call ‘AI’ has little to do with intelligence and much more with brute-force computing. That’s why it makes little sense to talk about ethical AI. In order to be ethical, I suggest we hurry up and create AI for the people rather than for autocratic governments or for large tech companies. Inventing ethical policies doesn’t get us anywhere and will not help us create AI.”

Anna Jobin, a postdoc at the Health Ethics and Policy Lab at ETH Zurich, doesn’t see it the same way. Based on her research, she believes that ethical considerations should be part of the development of AI: “We cannot treat AI as purely technological and add some ethics at the end; ethical and social aspects need to be included in the discussion from the beginning.” Because AI’s impact on our daily lives will only grow, Jobin thinks that citizens need to be engaged in debates on new technologies that use AI and that decisions about AI should include civil society. However, she also recognizes the limits of listing ethical principles in the absence of ethical governance.

For Peter Seele, professor of Business Ethics at USI, the University of Italian-speaking Switzerland, the key to resolving these issues is to place business, ethics, and law on an equal footing. “Businesses are attracted by regulations. They need a legal framework to prosper. Good laws that align business and ethics create the ideal environment for all actors,” he said. The challenge is to find a balance between the three pillars.

The perfect combination

Even though the Swiss approach relies mainly on self-regulation, Seele argues that establishing a legal framework would provide a significant boost to the economy and society.

If Switzerland were to take a lead role in defining ethical standards, its political system, based on direct democracy and democratically controlled cooperatives, could lay the foundation for the democratization of AI and the personal data economy. As the Swiss Academy of Engineering Sciences (SATW) suggested in a white paper at the end of 2019, a model for this could be MIDATA, a Swiss nonprofit cooperative that ensures citizens’ sovereignty over the use of their data and acts as a trustee for data collection. Owners of a data account can become members of MIDATA and participate in the democratic governance of the cooperative. They can also grant selective access to their personal data for clinical studies and medical research.

The emergence of an open data ecosystem that fosters the participation of civil society is raising awareness of the implications of using personal data, especially for health purposes, as in the case of the contact-tracing app. Even though the favoured decentralized system is argued to preserve fundamental rights better than a centralized approach, there are concerns about its susceptibility to cyber attacks.

The creation of a legal basis for AI could ignite a public debate on the validity and ethics of digital systems.