CONCRETE STEPS TOWARDS UNRAVELLING RESPONSIBLE USES OF AI SYSTEMS

AI can affect people in ethically relevant ways.

Humans, not machines, must take responsibility for the development and deployment of AI systems.

HUMAN is a tangible method to unravel AI in a responsible way.


HUMAN aligns bottom-up learnings with top-down frameworks.

Create a responsible AI strategy grounded in practice.


Do you want to work towards responsibly unravelling AI systems as well?

Use this toolbox and its 3 modules.

1. Involve your stakeholders

Conduct a Panel Assessment to understand the needs of people who are affected by or influence the system or its outputs.

 

Short version in 2 steps:

Step 1 – Stakeholders: Who is directly or indirectly affected by your AI system?

Step 2 – Involvement: How do you plan to involve a diverse set of stakeholders when you develop, deploy and evaluate your system?

Want to learn how? Here’s the LINK to the Panel Assessment Method.

2. Evaluate ethical impact

Conduct an Impact Assessment and make its results transparent to detect and minimize risks.

 

 

Short version in 3 steps:

Step 1 – Objectives: Which problem are you trying to solve and which objectives are you trying to achieve? How does deploying the system help you do that?

Step 2 – Ethical impact: Considering the context and the task for which your system is used, what ethically relevant impact could it have on stakeholders?

Step 3 – Evaluation: How do you evaluate whether you achieve your objectives and responsibly address ethical impact? Who is responsible for doing so?

Want to learn how? Here’s the LINK to the Impact Assessment Method.

Open source and freely available.

Unravel responsible uses of AI – get in touch to become part of our network!

If you would like support, add the third module:

3. Get support from experts

  • We help you to create a responsible AI strategy.
  • We help you align this strategy with your culture.

→ Get in touch with Intersections and AlgorithmWatch CH.


Statement by an adopter

“HUMAN supports ethical reflection and efficiently points out weak points.”

Svenja Herzog from the Canton of Basel shares her motivation, experience and recommendations on responsible AI.

HUMAN adopters 

The following applications have been tested: 

What and who is behind HUMAN?

HUMAN is a tangible method towards responsible uses of algorithms and AI. It combines a panel assessment, involving a diverse set of stakeholders, with an impact assessment, requiring deployers to reflect on objectives, measures, and results around ethical requirements.

Intersections empowers people and organisations with knowledge and tools to make the AI transformation safe, responsible and effective. https://intersections.ch/ 

AlgorithmWatch CH is a non-governmental, non-profit organization based in Zurich and Berlin. It stands up for a world where algorithms and Artificial Intelligence do not weaken justice, democracy, human rights and sustainability, but strengthen them. https://algorithmwatch.ch

HUMAN Team 

These people supported the development of our method: Nikki Böhler and Nathalie Klauser from Intersections in collaboration with Angela Müller and Michele Loi from AlgorithmWatch CH. 

A special thanks to our advisors: Nadine Bienefeld, Private Lecturer, ETH; Peter Kirchschläger, Professor, University of Lucerne; Céline Nauer, Code Excursion; Niniane Paeffgen, GESDA; Paola Pierri, Professor, HKB; Johan Rochel, EPFL; Afke Schouten, HWZ; Jean-Daniel Strub, Ethix.

HUMAN was supported by the Mercator Foundation Switzerland.