2023-09-05
Trusted AI encompasses systems that are not only ethically aligned but also reliable, robust, secure, transparent, and accountable. While ethical considerations form a crucial aspect, Trusted AI extends to include the technical quality and performance of the system. The goal is to create AI technologies that various stakeholders—including users, developers, businesses, and regulators—can trust to behave as intended and to meet specific performance, security, and governance standards.
The concept integrates aspects from multiple domains, including ethics, reliability engineering, security, and governance.
In this broader context, “trust” is a multidimensional construct. It’s not just about being ethical but about fulfilling a wide range of expectations and standards that make a system worthy of trust.
Trusted AI refers to artificial intelligence systems that are designed and validated to be reliable, safe, and technically robust. Key principles of trusted AI include:
Reliability - AI systems consistently produce accurate, reproducible results that can be externally validated. Models are unbiased, and decisions are explainable.
Safety and security - AI systems are developed using secure software practices and are resilient against attacks, hacking, or misuse. They protect privacy and confidentiality.
External oversight - There are independent audits and ongoing monitoring of AI systems to ensure transparency and identify issues. Public disclosure builds trust.
Human control - Humans remain fully in control of AI systems, which augment human intelligence rather than replace it. Autonomous systems operate under human supervision.
Proportionality - AI capabilities do not exceed what is appropriate for the task. AI is not used when less advanced technology suffices.
Technical robustness - AI systems are thoroughly tested for stability, scalability, and corner cases. Failsafes prevent glitches or unintended harm.
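The reliability and robustness principles above can be sketched in code. The following is a minimal, illustrative check; `train_model`, `check_reproducibility`, and `check_corner_cases` are hypothetical names, and the "model" is a stand-in for a real training routine.

```python
# Minimal sketch of reliability/robustness checks (hypothetical example):
# with a fixed random seed a run should be reproducible, and corner-case
# inputs should not crash the pipeline or produce NaN.
import random


def train_model(data, seed=42):
    """Hypothetical training routine: returns a deterministic 'score'."""
    rng = random.Random(seed)          # seeded generator => reproducible
    noise = rng.random()
    return sum(data) / len(data) + noise


def check_reproducibility(data, seed=42):
    # Two runs with the same seed must yield identical results.
    return train_model(data, seed) == train_model(data, seed)


def check_corner_cases(model_fn):
    # Corner cases: single element, cancelling extremes, tiny magnitudes.
    for case in ([0.0], [-1e9, 1e9], [1e-12]):
        result = model_fn(case)
        assert result == result       # a NaN would fail this self-equality
    return True


data = [0.1, 0.5, 0.9]
assert check_reproducibility(data)
assert check_corner_cases(train_model)
```

Real validation suites would add adversarial inputs, load testing, and statistical bias audits, but the pattern is the same: encode each trust principle as an automated, externally runnable check.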
In contrast to responsible AI, trusted AI emphasizes technical system properties over organizational processes. It focuses on engineering reliable, safe and externally validated AI, rather than holistic internal governance.
“course22/06-why-you-should-use-a-framework.ipynb at master · fastai/course22.” Accessed August 29, 2023. https://github.com/fastai/course22/blob/master/06-why-you-should-use-a-framework.ipynb.
If you are not part of the nd-crane organization, email your GitHub ID to pmoreira@nd.edu so we can add you to the nd-crane organization.
Using Codespaces to work with the “Practical Deep Learning for Coders” course