Learn from a specific use case to create your responsible AI strategy
CONCRETE STEPS TOWARDS
UNRAVELLING RESPONSIBLE
USES OF AI SYSTEMS
AI can affect people in ethically relevant ways.
Humans, not machines, must take responsibility for its development and deployment.
HUMAN is a tangible method to unravel AI in a responsible way.
HUMAN aligns bottom-up learnings with top-down frameworks.
Create a responsible AI strategy grounded in practice.
Test your responsible AI strategy by engaging with a specific use case.
HUMAN helps you live up to the responsibility that comes with using technology.
Do you want to work towards responsibly unravelling AI systems as well?
Use this toolbox and its 3 modules.
1. Involve your stakeholders
Conduct a Panel Assessment to understand the needs of people who are affected by or influence the system or its outputs.
Short version in 2 Steps:
Stakeholders:
Who is directly or indirectly affected by your AI system?
Involvement:
How do you plan to involve a diverse set of stakeholders when you develop, deploy and evaluate your system?
2. Evaluate ethical impact
Conduct an Impact Assessment and make its results transparent to detect and minimize risks.
Short version in 3 Steps:
Objectives:
Which problem are you trying to solve and which objectives are you trying to achieve? How does deploying the system help you do that?
Ethical Impact:
Considering the context and the task for which your system is used, what ethically relevant impact could it have on stakeholders?
Evaluation:
How do you evaluate whether you achieve your objectives and responsibly address ethical impact? Who is responsible for doing so?
Open source and freely available.
Unravel responsible uses of AI – get in touch to become part of our network!
If you would like support, add the third module:
3. Get support from experts
- We help you create a responsible AI strategy.
- We help you align this strategy with your organisation's culture.
→ Get in touch with Intersections and AlgorithmWatch CH.
Statement by an adopter
"HUMAN supports ethical reflection and efficiently highlights weak points."
Svenja Herzog from the Canton of Basel shares her motivation, experience and recommendations on Responsible AI.
HUMAN adopters
The following applications have been tested:
- Avatar in public television
- HR Assistance GenAI Chatbot
- Citizen Assistance GenAI Chatbot
What and who is behind HUMAN?
HUMAN is a tangible method towards responsible uses of algorithms and AI. It combines a panel assessment, involving a diverse set of stakeholders, with an impact assessment, requiring deployers to reflect on objectives, measures, and results around ethical requirements.
Intersections empowers people and organisations with knowledge and tools to make the AI transformation safe, responsible and effective. https://intersections.ch/
AlgorithmWatch CH is a non-governmental, non-profit organization based in Zurich and Berlin. It stands up for a world where algorithms and Artificial Intelligence do not weaken justice, democracy, human rights and sustainability, but strengthen them. https://algorithmwatch.ch
HUMAN Team
Our method was developed by Nikki Böhler and Nathalie Klauser from Intersections, in collaboration with Angela Müller and Michele Loi from AlgorithmWatch CH.

A special thanks to our advisors: Nadine Bienefeld (Private Lecturer, ETH), Peter Kirchschläger (Professor, University of Lucerne), Céline Nauer (Code Excursion), Niniane Paeffgen (GESDA), Paola Pierri (Professor, HKB), Johan Rochel (EPFL), Afke Schouten (HWZ) and Jean-Daniel Strub (Ethix).
HUMAN was supported by the Mercator Foundation Switzerland.