We are BABL AI, a boutique consulting and audit firm focused on responsible AI. We believe that algorithms should be developed, deployed, and governed in ways that prioritize human flourishing.
We unlock the value of responsible AI for clients by combining leading research expertise and extensive practitioner experience in AI and organizational ethics to drive impactful change at the frontier of technology and emerging standards. Our team consists of leading experts and practitioners in AI, Ethics, Law, and Machine Learning.
Thanks, very helpful. One question - when you do proxy testing, you are bound to find some correlation with protected characteristics (even if small). How do you determine what level of correlation is regarded as problematic?
That's the million-dollar question! In the NYS law, the guidance is minimal in this regard: "Whether ECDIS correlates with a protected class may be determined using data available to the insurer or may be reasonably inferred using accepted statistical methodologies." However, correlations between variables exist on a continuum, so companies will need to find an external reference, ideally from the insurance industry. This resource is pretty good, but it focuses much more on disparate impact testing (citing the 4/5ths rule) than on correlation thresholds: www.actuary.org/sites/default/files/2023-08/risk-brief-discrimination.pdf In short, the company gets to decide, as long as they have a credible reference.
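To make that concrete, here is a minimal sketch (plain Python, with toy data and illustrative variable names, not any regulator's prescribed method) of the two numbers that typically come out of this kind of proxy testing: the point-biserial correlation between a model score and a protected-class indicator, and the 4/5ths-rule adverse impact ratio discussed in the resource above.

```python
# Hypothetical proxy-testing sketch: (1) how strongly a score correlates
# with membership in a protected class, and (2) the 4/5ths-rule adverse
# impact ratio on the resulting selection decisions.
# All data, names, and thresholds below are illustrative assumptions.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation; with a 0/1 group indicator this is the
    point-biserial correlation often used in proxy testing."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def adverse_impact_ratio(selected, group):
    """Selection rate of the protected group divided by the selection
    rate of the reference group; the 4/5ths rule flags ratios below 0.8."""
    sel_prot = [s for s, g in zip(selected, group) if g == 1]
    sel_ref = [s for s, g in zip(selected, group) if g == 0]
    return (sum(sel_prot) / len(sel_prot)) / (sum(sel_ref) / len(sel_ref))

# Toy data: model score, protected-class indicator (1 = protected),
# and the binary decision derived from the score.
score = [0.72, 0.65, 0.58, 0.81, 0.40, 0.55, 0.69, 0.47]
protected = [0, 1, 1, 0, 1, 1, 0, 0]
selected = [1 if s >= 0.6 else 0 for s in score]

r = pearson_r(score, protected)
air = adverse_impact_ratio(selected, protected)
print(f"correlation with protected class: {r:+.2f}")
print(f"adverse impact ratio: {air:.2f} "
      f"({'flags' if air < 0.8 else 'passes'} the 4/5ths rule)")
```

Note that the 0.8 cut-off in the 4/5ths rule is the only widely cited bright line here; for the raw correlation, the "problematic" threshold is whatever the company can defend with a credible external reference.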
Thanks for a great discussion. The EU AI Act mandates certain aspirational things that are impossible (for example, no bias). Clearly, some bias is inevitable, even if it is less than the human process it replaces. As a provider, how do you demonstrate you have taken sufficient measures to avoid bias?
I don't think the Act will actually require the removal of all bias, but bias will need to be a significant consideration for providers, both in their risk management system (Article 9) and in their technical testing (e.g., Article 15).
I stumbled upon your video while searching on the topic of AI Ethics and Governance. I want to start my own consultancy in this space at this stage of my life, and your information is priceless.
Great delivery of information there on AI risk and impact assessment. Is it possible to get a copy of the cheat sheet you mentioned in your presentation? By the way, you've got a new subscriber here.
Interesting discussion on the importance of trust in business. It's also eye-opening to see the significant gap between business leaders and consumers in their perceptions of trust; I hope more effort will be made to close this gap.
Thanks! I'm pursuing a similar career. Do you know the best way to stay updated on AI developments? Btw, ChatGPT linked this video to me hahah
Good insight here Shea. This makes me wonder about the self-identification questions often posed at the end of online job applications. Does Human Resources filter out, or remove, candidates at the top of the funnel based on how people answer the "voluntary self-identification" questions: 1) gender, 2) Hispanic/Latino, 3) veteran status, and 4) disability? In other words, once they get, let's say, 100 male candidates, that bucket might close, and so forth?
I'd be curious to join the industry. Right now I'm trying to become a developer, but I've spent the last 12 months of my life learning how to prompt inject and how AI systems work. Let me tell you, it is neither easy nor simple. I'm curious where I should reach out if I wanted to apply.
What is the industry market like, in terms of revenue, for setting up a consulting firm which specialises in AI risk management based on the laws governing various regions across the globe, like NIST in the US, the EU's own framework, and so on...?
You are amazing! You should have more subscribers. Thank you for the nuggets of wisdom. This is the first video on AI ethics and governance that showcases a practical way to practice it and be it! Thank you.
Thank you for sharing your knowledge! What methodologies can you use for doing fundamental rights or human rights impact assessments? Is it the framework described in one of your publications, "A Framework for Assurance Audits of Algorithmic Systems"? I'm also aware of others, such as the HRIA methodology for digital activities from the Danish Institute of Human Rights. Are they comparable?
That already sounds like quite a lot. I would say the split should be 85% dependency on the developers and 15% on the users. It might differ if you have a no-code computer vision platform to train whatever you want, but even then it would be much easier to ban risky use cases (e.g., face recognition) on the dev side!
The NIST AI RMF does have quite a bit of overlap with ISO 42001, in that many elements of the Govern, Map, Measure, and Manage functions can be mapped onto ISO 42001 controls. However, it's not a perfect mapping: NIST is more high-level in some places and very specific in others. For example, the Generative AI guidelines that NIST released are not present at all in ISO 42001.
Could you elaborate on who is obligated to carry out a FRIA? Chapter 3, section 3, article 27 says "deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights", but section 3 itself is named "Obligations of providers and deployers of high-risk AI systems and other parties", also mentioning providers. In another episode, you mentioned that carrying out a FRIA shows the commitment of the enterprise to its customers, building mutual trust and early differentiating from competitors that choose to postpone this process. I quote: "Now's the time to do that because pretty soon everybody's going to have to do this and you're just going to be one among a sea of people who are only meeting the floor of that regulation". Do you imply that in the future a FRIA will be mandatory for every business that implements AI systems or solely for businesses that implement high-risk AI systems? Thanks for the great content by the way!
Most businesses will not need to conduct a FRIA as it's written in the law; only public organisations and certain other private companies offering public services, credit scoring, and insurance will. However, the risk assessment process outlined in Article 9 is not dissimilar from a FRIA, so Providers of high-risk AI systems will effectively be completing assessments that have to consider fundamental rights.
I am just a novice. I need to find an AI Governance platform. It will be used for collaboration on sports-related content creation. I could use help finding a basic AI Governance platform. Fact sheet generation will be important. Thank you.
Now that the EU AI Act has passed, I think you'll find companies will be looking for help in preparing and maintaining their AI governance and risk management... AI ethics is an important part of that. The key will be to find a way to get yourself noticed, and to specialize in a particular niche that companies need. For this, talking with as many potential clients as possible will be the best method.
I work for an executive consulting firm partnering with candidates to place C-level and -1 level leaders for some pretty large companies. How do you think that translates into AI ethics? My role is to assess HR leaders, so I don't actually work in HR. More so recruiting. Would love to get into ethical AI work but don't know how to make the connection. Any ideas are welcome!
Most of our work currently is in the HR space, as this is where AI is making a lot of impact and where regulations are focusing their efforts. The connection is easy to make, and it probably just starts with you reading and speaking/posting about it on social media (LinkedIn is great for this). Adding some courses in this area can't hurt to build credibility; shameless plug for our courses here :) babl.ai/courses/ and "Lunchtime BABLing" listeners can always save 20% off all our online courses using coupon code "BABLING"
In another video about this I heard that deepfakes (they called them audio- and video-altering programs) won't be considered high-risk, hence they will be left unregulated. Like, WTF? Is this a joke?
Thank you immensely. Your videos are really helping me; I plan to watch them chronologically. I am seeking to embark on a career in AI ethics consulting. A simple question: what if I am good at some of the skills you mentioned but have no certifications?
Greetings from the UK. I am about to take a short AI course so this discussion was useful. I am also looking at the AI and Algorithm Auditor Certification Program. Will the material in the auditor course allow me to operate in the UK and EU?
The field is currently unregulated, so no formal certification is required to operate anywhere. However, we are in close contact with UK and EU regulators about these issues. See, e.g.: www.gov.uk/ai-assurance-techniques/babl-ai-conducting-third-party-audits-for-automated-employment-decision-tools and github.com/algorithmicbiaslab/public-resources/blob/main/policy/eu/eu-comm_dsa_2023-11-20.pdf
Can you please share more details? What is the format of the course, recorded sessions or live training? What is the exam pattern? Can we take the course at our own pace, etc.?
Great questions. The lectures are recorded, but we have Q&A over Zoom most weeks, a student community (Slack), and students take the courses at their own pace. Exams are once per month. More information can be found here: courses.babl.ai/p/ai-and-algorithm-auditor-certification
Not at the moment, though we're working on socializing it in both the EU and US. We are recognized by the newly formed International Association of Algorithmic Auditors (Shea is one of the founding members). We talk about it here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-hNQ-j6NjQug.html
I've been trying to find a way to combine my non-traditional background and skills to contribute to this field, but didn't really know where to start until I found this and your other videos. Really insightful. Just subscribed. Thank you!