ConferenceCall 2024 05 01

Ontolog Forum


== Agenda ==
* Introduction to the Risk Panel [https://bit.ly/4bmqhAw Video Recording]
* '''[[KenBaclawski|Ken Baclawski]]''' Overview of Risks and Ethics [https://bit.ly/3WnUZoJ Slides] [https://bit.ly/3JMp8Xv Video Recording]
** Chair, Ontolog Board of Trustees
* '''Gabriella Waters'''
** Gabriella Waters is an artificial intelligence and machine learning researcher on the ARIA team at NIST, where she coordinates AI testing and evaluation across three teams. She is also the Director of Operations and Director of the Cognitive & Neurodiversity AI (CoNA) Lab at the Center for Equitable AI & Machine Learning Systems at Morgan State University in Baltimore, MD. She serves as the AI advisor and principal AI scientist at the Propel Center, where she is also a professor of Culturally Relevant AI/ML Systems. She is passionate about increasing the diversity of thought around technology and focuses on interdisciplinary collaborations to drive innovation, equity, explainability, transparency, and ethics in the development and application of AI tools. In her research, Gabriella is interested in studying the intersections between human neurobiology & learning, quantifying ethics & equity in AI/ML systems, neuro-symbolic architectures, and intelligent systems that make use of those foundations for improved human-computer synergy. She develops technology innovations, with an emphasis on support for neurodiverse populations.
* '''Ramayya Krishnan'''
** [https://standards.ieee.org/ieee/3396/11379/ Recommended Practice for Defining and Evaluating Artificial Intelligence (AI) Risk, Safety, Trustworthiness, and Responsibility]
** Carnegie Mellon University (CMU)
** [https://bit.ly/3UPMthl Audio Recording]
* '''[[MikeBennett|Mike Bennett]]'''
** Ontolog Board of Trustees and OMG
** [https://bit.ly/3ULMDGm Video Recording]
* Discussion [https://bit.ly/3WwCoaa Video Recording]




== Resources ==
* [https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF NIST AI Risk Management Framework]
* [https://www.nccoe.nist.gov/get-involved/attend-events/nccoe-learning-series-overview-nccoes-dioptra-aiml-test-platform NCCoE Learning Series: Overview of the NCCoE's Dioptra—An AI/ML Test Platform]
* [https://www.iso.org/obp/ui/en/#iso:std:iso-iec:tr:24027:ed-1:v1:en ISO/IEC TR 24027:2021(en) Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making]
* [https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0 ISO/IEC JTC 1. 2024. Standards by ISO/IEC JTC 1/SC 42 Artificial intelligence]
* [https://www.knowledgegraph.tech/ KG Conference Website]
* [https://standards.ieee.org/ieee/3396/11379/ Recommended Practice for Defining and Evaluating Artificial Intelligence (AI) Risk, Safety, Trustworthiness, and Responsibility]



Latest revision as of 01:18, 7 May 2024

Session: Risks and Ethics
Duration: 1 hour
Date/Time: 01 May 2024 16:00 GMT (9:00am PDT / 12:00pm EDT / 4:00pm GMT / 6:00pm CEST)
Convener: Ken Baclawski

Ontology Summit 2024 Risks and Ethics


== Conference Call Information ==

* Date: Wednesday, 1 May 2024
* Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
* Expected Call Duration: 1 hour

The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09

== Participants ==

== Discussion ==


== Previous Meetings ==

* ConferenceCall 2024 04 24 (Applications)
* ConferenceCall 2024 04 17 (Applications)
* ConferenceCall 2024 04 10 (Synthesis)
* ... further results

== Next Meetings ==

* ConferenceCall 2024 05 08 (Risks and Ethics)
* ConferenceCall 2024 05 15 (Synthesis)
* ConferenceCall 2024 05 22 (Communiqué)