Data

Nordic Ethical AI Landscape

The purpose of the Nordic Ethical AI Landscape project is to create an overview showcasing Nordic companies specialized in ethical AI and responsible use of data. The landscape addresses the question of how Nordic businesses can best position themselves at the forefront of legislative, technical and ethical developments in order to become leaders in the AI & Data field.

A quality stamp for Nordic companies

The landscape serves as a quality stamp for the selected companies, which they can use in their own branding and communication.

The landscape can be used by companies looking to improve the ethics of their AI solutions, and it serves as a knowledge repository for investors and other stakeholders seeking potential investment targets and collaborators on their own AI journey. It can also function as an information tool for sourcing new collaboration partners, potential customers and subcontractors.

The Nordic Ethical AI Landscape 2022

The following map shows the members of the current Nordic Ethical AI Landscape. The companies are selected based on a range of criteria related to ethical use of AI. If you want to know more about the specific criteria and what it takes to become a part of the landscape, please contact us at: olivia.rekman@nordicinnovation.org or A.Sunnanmark@nordicinnovation.org.

Members of the Nordic Ethical AI Landscape Q4 2022

The following startups are currently members of the Nordic Ethical AI Landscape.

Organization | Subcategory
2021 AI | ModelOps and Monitoring
Abzu | Targeted AI Solutions and Technologies
Anch AI | AI Audits and GRC
Develop Diverse | Targeted AI Solutions and Technologies
Exparang | Targeted AI Solutions and Technologies
Layke Analytics | Targeted AI Solutions and Technologies
Saidot | AI Audits and GRC
Silo AI | Targeted AI Solutions and Technologies
Tengai | Targeted AI Solutions and Technologies
Veil AI | Data for AI
Headai | Targeted AI Solutions and Technologies
Indivd | Targeted AI Solutions and Technologies
LinkAI AB | Data for AI
Repli5 | Data for AI
Syndata AB | Data for AI
SyntheticAIData | Data for AI
Unbiased | Data for AI
Valossa AI | Targeted AI Solutions and Technologies
VerAI AB | AI Audits and GRC
Adlede | Targeted AI Solutions and Technologies

The following consultancy firms are currently members of the Nordic Ethical AI Landscape.

Organization
Siili
Dain Studios
Cuelebre
Bui Consulting

The following institutions are currently members of the Nordic Ethical AI Landscape.

Organization
Dataethics
Aalto University
Icelandic Institute for Intelligent Machines
NordSTAR (Oslo Metropolitan University)
Norwegian Council for Digital Ethics
Umeå University
Finnish Center for Artificial Intelligence (FCAI)
Danish Data Ethics Council
Swedish AI Ethics Lab
Lund University
University of Helsinki
Technical Research Center Finland (VTT)

Is your company missing?

The Nordic Ethical AI Landscape will be updated on a regular basis. If you represent a company or an institution working to foster the application of ethical AI – or know of such a company or institution – please fill in the submission form on EAIDB's website.

Project partners

The Nordic Ethical AI Landscape is developed in collaboration with Ethical AI Database (EAIDB) and Ethical AI Governance Group (EAIGG).

If you want to know more about the Nordic Ethical AI Landscape and the methodology behind it, please have a look at EAIDB.

Background

The Nordic Ethical AI Landscape is funded under the AI and Data program, which is one of eight initiatives launched by the five Nordic Ministers of Trade and Industry.

Our aim is that the landscape can be used as a platform for matching companies and organizations within the Nordics, to help them on their ethical AI journey. In this way, we hope to propel ethical AI capabilities in Nordic companies and help foster the competitive advantage needed to meet the Nordic Council of Ministers’ 2030 vision in AI & Data: to become world leading in digitalization, ethical AI and responsible use of AI.

Frequently asked questions

Why does ethical AI matter?

“When AI breaks, it breaks violently.” There are countless examples on the internet of technology going wrong. The “AI risk” is a hazard not only to the company developing the technology, but to all stakeholders involved, not least the end customer or citizen using the service. For investors, founders, and enterprises, minimizing AI risk is always profitable. Furthermore, it is the responsibility of the technology developer to make sure the technology does not amplify inequality in society. Technology should move society in a positive direction and should work equally for all users.

Is ethical AI still relevant in a trust-based society like the Nordics?

Absolutely. AI that is not properly governed, documented, controlled, and observed is volatile AI that can break at any time. The Nordics may have more of an incentive than other countries to monitor their AI, but developing ethical AI still requires an investment of money and time, which many companies, from startups to corporations, are not always willing to make.

How do you define ethical AI?

We define an “ethical AI company” as one that either offers services to help other companies make their AI systems more transparent, fair, trustworthy, and responsible, or one that has developed technology that directly improves an area of human society that has previously been relatively unethical (e.g., toxic content moderation algorithms to improve internet safety, better facial recognition algorithms with less bias against darker skin tones, financial services companies improving existing lending algorithms).

What about climate change and cyber security?

There is no denying the importance of startups fighting climate change or protecting vital information. However, we differentiate between startups that actively use ethical approaches and algorithms and startups that use regular AI to solve a social or environmental problem. For example, a company may use AI to quantify and track an organization's climate risk, while the technology itself is not substantially more fair, transparent, ethical, or responsible than the industry average. Cybersecurity is an enormous space by itself - including it in this project would dilute the other smaller (but equally relevant) categories.

How were the different categories derived?

The categories were created by studying the machine learning pipeline and how data flows through it. It begins at the data level (where the data must be observed and debiased), travels to the model (which also must be debiased and explained), then outputs results (which need to be interpreted and applied). The Governance, Risk and Compliance (GRC) category comes from the idea that every machine learning pipeline needs to be audited, governed, and checked at every step along the way to manage risk and enforce compliance. The global database (EAIDB) includes an open-source category, but in the Nordic Landscape this is replaced by institutions. Most consulting firms fall under the GRC category as well, but they are broken out separately in the Nordic maps to bring more focus to them.

What does the vetting process look like?

Vetting is done primarily through the company website as well as secondary articles, media, or research papers. Generally, companies deserving the title “ethical AI” are the first to advertise themselves as such, since ethics is at the forefront of their overall approach. Some other key terms are “fairness / bias monitoring,” “transparency and trust,” etc. An understanding of the underlying technology is sometimes required in the case of startups with whitepapers or research explaining their methods. Overall, it is fairly obvious when a company does in fact belong in the list. However, this approach is not perfect - there are many companies that identify themselves as “ethical” even when their technology is just average. A more thorough process would involve demos of each and every product, an endeavor that EAIDB has already begun but not yet finished.