AIM: The OECD AI Incidents Monitor explained by Marko Grobelnik 

OECD.AI

Marko Grobelnik, researcher at the Jozef Stefan Institute's AI Lab and Co-chair of the OECD.AI Expert Group on AI Incidents, explains how AIM, the OECD AI Incidents Monitor, works and why policymakers need a dynamic evidence base like this to help manage AI's risks and benefits.
While AI provides tremendous benefits, some uses of AI produce dangerous results that can harm individuals, businesses, and societies. These negative outcomes, captured under the umbrella term “AI incidents”, are diverse in nature and happen across sectors and industries.
To give a few examples, some algorithms incorporate biases that discriminate against people based on their gender, race or socioeconomic status. Others manipulate individuals by influencing what they believe or how they vote. On a different level but just as critical, some skilled jobs are entrusted to AI, increasing unemployment in some sectors and harming individuals and professions.
At this stage, these issues are known but reported ad hoc, without a consistent method. From the little data that exist, we can see that incident reports are increasing rapidly. Incidents reported in the media are the most visible, and they are only the tip of the iceberg.
AI actors need to identify and document these incidents in ways that do not hinder AI’s positive outcomes. Failure to address these risks responsibly and quickly will exacerbate the lack of trust in our societies and could push our democracies to the breaking point. Importantly, data about past incidents can help us to prevent similar incidents from happening in the future.
What is an “AI incident”?
Employment risks, bias, discrimination and social disruption are obvious harms, but what do they have in common? What about the hazards, less obvious or even precursor events that could lead to harm? If experts worldwide are going to track and monitor incidents, the first thing to do is agree upon what an AI incident is.
This week, the OECD published a stocktaking paper as a first step towards a common understanding of AI incidents. The OECD.AI Expert Group on AI Incidents brings together experts from policy, academia, the technical community and important initiatives like the Responsible AI Collaborative. It is currently developing a shared terminology that will become the foundation for reporting incidents in a compatible way across jurisdictions. From there, governments and other actors can establish an international framework for reporting incidents and hazards.
AIM to evolve into a global AI incidents monitor for standardised AI incident reporting
In the long run, AIM will evolve into a tool pooling standardised information about AI incidents and controversies worldwide.
Current work to establish a common framework for incident reporting will become part of AIM to ensure consistency and global interoperability. Developers will be able to link incidents to existing tools in the Catalogue of Tools & Metrics for Trustworthy AI, helping all actors avoid reproducing known incidents across jurisdictions and contributing to more trustworthy AI.
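To illustrate what pooling standardised incident information across jurisdictions might look like in practice, here is a minimal, hypothetical sketch of an incident record. All field names and values are illustrative assumptions, not the OECD's actual schema or terminology.

```python
from dataclasses import dataclass, field

# Hypothetical standardised AI incident record.
# Fields are illustrative assumptions, not the OECD schema.
@dataclass
class AIIncidentRecord:
    incident_id: str
    title: str
    description: str
    jurisdiction: str          # country or region where the harm occurred
    harm_type: str             # e.g. "bias", "manipulation", "job displacement"
    severity: str              # "hazard" (precursor event) vs "incident" (realised harm)
    date_reported: str         # ISO 8601 date
    sources: list[str] = field(default_factory=list)  # media or registry links

# Example record for a bias-related incident (fabricated for illustration).
record = AIIncidentRecord(
    incident_id="2023-0001",
    title="Hiring algorithm discriminates by gender",
    description="An automated screening tool down-ranked applications from women.",
    jurisdiction="Unknown",
    harm_type="bias",
    severity="incident",
    date_reported="2023-11-13",
)
print(record.harm_type)
```

A shared, machine-readable structure like this is what would let reports filed in different jurisdictions be compared and aggregated; note the distinction between a "hazard" (a precursor that could lead to harm) and an "incident" (realised harm), which mirrors the terminology work described above.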

Science

Published: 13 Nov 2023
