Montréal Declaration on Responsible AI


https://montrealdeclaration-responsibleai.com/

On November 3, 2017, the Université de Montréal launched the co-construction process for the Montréal Declaration for a Responsible Development of Artificial Intelligence (Montréal Declaration). A year later, the results of these citizen deliberations are public. Dozens of events were organized to stimulate discussion on the social issues raised by artificial intelligence (AI), and 15 deliberation workshops were held over three months, involving over 500 citizens, experts and stakeholders from all backgrounds.

 

The selected citizen co-construction method was based on a preliminary declaration of general ethical principles structured around seven fundamental values: well-being, autonomy, justice, privacy, knowledge, democracy and responsibility. Following the process, the Declaration was enriched and now presents 10 principles based on the following values: well-being, autonomy, intimacy and privacy, solidarity, democracy, equity, inclusion, caution, responsibility and environmental sustainability.

 

The Montréal Declaration for responsible AI development has three main objectives:

  1. Develop an ethical framework for the development and deployment of AI;
  2. Guide the digital transition so everyone benefits from this technological revolution;
  3. Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development.

 

Principles

The Declaration’s first objective consists of identifying the ethical principles and values that promote the fundamental interests of people and groups. These principles, applied to the field of digital technology and artificial intelligence, remain general and abstract. To read them correctly, it is important to keep the following points in mind:

  • Although they are presented as a list, there is no hierarchy. The last principle is no less important than the first. However, it is possible, depending on the circumstances, to lend more weight to one principle than another, or to consider one principle more relevant than another.
  • Although they are diverse, they must be interpreted consistently to avoid any conflict that could prevent them from being applied. As a general rule, the limits of one principle’s application are defined by another principle’s field of application.
  • Although they reflect the moral and political culture of the society in which they were developed, they provide the basis for an intercultural and international dialogue.
  • Although they can be interpreted in different ways, they cannot be interpreted in just any way. It is imperative that the interpretation be coherent.
  • Although these are ethical principles, they can be translated into political language and given a legal interpretation.

Based on these principles, recommendations were made to establish guidelines for the digital transition within the Declaration’s ethical framework. They cover a few key cross-sectoral themes for reflecting on the transition towards a society in which AI helps promote the common good: algorithmic governance, digital literacy, digital inclusion of diversity and ecological sustainability.

Our process for responsible artificial intelligence

The Montréal Declaration for a Responsible Development of Artificial Intelligence is based on a declaration of ethical principles built around seven core values: well-being, autonomy, justice, privacy, knowledge, democracy and responsibility. These values, suggested by a group of experts in ethics, law, public policy and artificial intelligence, were informed by a deliberation process. This deliberation occurred through consultations held over three months in 15 different public spaces, and sparked exchanges among over 500 citizens, experts and stakeholders from all backgrounds.
