ConferenceCall 2024 05 01
Ontolog Forum
| Session | Risks and Ethics |
|---|---|
| Duration | 1 hour |
| Date/Time | 01 May 2024 16:00 GMT |
| | 9:00am PDT / 12:00pm EDT |
| | 4:00pm GMT / 6:00pm CEST |
| Convener | Ken Baclawski |
Ontology Summit 2024: Risks and Ethics
Agenda
- Introduction to the Risk Panel [Video Recording](https://bit.ly/4bmqhAw)
- Ken Baclawski: Overview of Risks and Ethics [Slides](https://bit.ly/3WnUZoJ) [Video Recording](https://bit.ly/3JMp8Xv)
  - Chair, Ontolog Board of Trustees
- Gabriella Waters
  - Gabriella Waters is an artificial intelligence and machine learning researcher on the ARIA team at NIST, where she coordinates AI testing and evaluation across three teams. She is also the Director of Operations and Director of the Cognitive & Neurodiversity AI (CoNA) Lab at the Center for Equitable AI & Machine Learning Systems at Morgan State University in Baltimore, MD. She serves as the AI advisor and principal AI scientist at the Propel Center, where she is also a professor of Culturally Relevant AI/ML Systems. She is passionate about increasing the diversity of thought around technology and focuses on interdisciplinary collaborations to drive innovation, equity, explainability, transparency, and ethics in the development and application of AI tools. In her research, Gabriella is interested in studying the intersections between human neurobiology & learning, quantifying ethics & equity in AI/ML systems, neuro-symbolic architectures, and intelligent systems that make use of those foundations for improved human-computer synergy. She develops technology innovations, with an emphasis on support for neurodiverse populations.
  - [Recommended Practice for Defining and Evaluating Artificial Intelligence (AI) Risk, Safety, Trustworthiness, and Responsibility](https://standards.ieee.org/ieee/3396/11379/)
  - [Audio Recording](https://bit.ly/3UPMthl)
- Mike Bennett
  - Ontolog Board of Trustees and OMG
  - [Video Recording](https://bit.ly/3ULMDGm)
- Discussion [Video Recording](https://bit.ly/3WwCoaa)
Conference Call Information
- Date: Wednesday, 1 May 2024
- Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
- ref: World Clock
- Expected Call Duration: 1 hour
- Video Conference URL: https://bit.ly/48lM0Ik
- Conference ID: 876 3045 3240
- Passcode: 464312
The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09
Participants
Discussion
Resources
- [NIST AI Risk Management Framework](https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF#:~:text=The%20AI%20Risk%20Management%20Framework,products%2C%20services%2C%20and%20systems)
- [NCCoE Learning Series: Overview of the NCCoE's Dioptra—An AI/ML Test Platform](https://www.nccoe.nist.gov/get-involved/attend-events/nccoe-learning-series-overview-nccoes-dioptra-aiml-test-platform)
- [ISO/IEC TR 24027:2021(en) Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making](https://www.iso.org/obp/ui/en/#iso:std:iso-iec:tr:24027:ed-1:v1:en)
- [ISO/IEC JTC 1. 2024. Standards by ISO/IEC JTC 1/SC 42 Artificial intelligence](https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0)
- [KG Conference Website](https://www.knowledgegraph.tech/)
- [Recommended Practice for Defining and Evaluating Artificial Intelligence (AI) Risk, Safety, Trustworthiness, and Responsibility](https://standards.ieee.org/ieee/3396/11379/)
Previous Meetings
| Session | |
|---|---|
| ConferenceCall 2024 04 24 | Applications |
| ConferenceCall 2024 04 17 | Applications |
| ConferenceCall 2024 04 10 | Synthesis |
| ... further results | |
Next Meetings
| Session | |
|---|---|
| ConferenceCall 2024 05 08 | Risks and Ethics |
| ConferenceCall 2024 05 15 | Synthesis |
| ConferenceCall 2024 05 22 | Communiqué |