Tools for Trusted AI

Charles F Vardeman II

2023-09-05

Trusted AI Frameworks for Knowledge Engineering

What is Trusted AI – Let’s ask ChatGPT!

Trusted AI encompasses systems that are not only ethically aligned but also reliable, robust, secure, transparent, and accountable. While ethical considerations form a crucial aspect, Trusted AI extends to include the technical quality and performance of the system. The goal is to create AI technologies that various stakeholders—including users, developers, businesses, and regulators—can trust to behave as intended and to meet specific performance, security, and governance standards.

The concept integrates aspects from multiple domains, such as:

  • Ethics: Fairness, transparency, and accountability.
  • Reliability: Consistent and dependable system performance.
  • Robustness: Ability to handle unexpected inputs or conditions without failing.
  • Security: Measures to protect data and models from unauthorized access and tampering.
  • Transparency: Understandability of system decisions and actions.
  • Accountability: Mechanisms to trace actions back to the responsible entities for evaluation and auditing.

In this broader context, “trust” is a multidimensional construct. It’s not just about being ethical but about fulfilling a wide range of expectations and standards that make a system worthy of trust.

What is Trusted AI – Let’s ask Claude 2!

Trusted AI refers to artificial intelligence systems that are designed and validated to be reliable, safe and technically robust. Key principles of trusted AI include:

  • Reliability - AI systems consistently produce accurate, reproducible results that can be externally validated. Models are unbiased and decisions explainable.

  • Safety and security - AI systems are developed using secure software practices and are resilient against attacks, hacking or misuse. They protect privacy and confidentiality.

  • External oversight - There are independent audits and ongoing monitoring of AI systems to ensure transparency and identify issues. Public disclosure builds trust.

  • Human control - Humans remain fully in control of AI systems, which augment human intelligence rather than replace it. Autonomous systems have human supervision.

  • Proportionality - AI capabilities do not exceed what is appropriate for the task. AI is not used when less advanced technology suffices.

  • Technical robustness - AI systems are thoroughly tested for stability, scalability and corner cases. Failsafes prevent glitches or unintended harm.

In contrast to responsible AI, trusted AI emphasizes technical system properties over organizational processes. It focuses on engineering reliable, safe and externally validated AI, rather than holistic internal governance.

Our evolving vision of Trusted AI

Responsible AI

  • Responsible AI: This term is broader and refers to the ethical design, development, and deployment of AI. This includes considerations not just of the AI’s technical behavior (e.g., is it transparent and accountable?), but also of the socio-economic implications, like job displacement, and broader ethical considerations like data privacy and environmental impact.

Foundational Components for Trusted AI

  • Continuous Integration & Deployment (CI/CD): Automate the integration and deployment of code, ensuring quality and operational efficiency.
  • Standardized Development Environments: Establish consistent, easily replicable environments to accelerate development and experimentation.
  • Data & Experiment Versioning: Implement robust systems to track changes in data and experiments, allowing for traceability and repeatability (a minimal sketch follows this list).
  • Model Lifecycle Management: Streamline the training, deployment, monitoring, and updating of machine learning models.
  • Flexibility Across Layers: Design the architecture to allow for different levels of customization, from high-level APIs to low-level controls, facilitating adaptability.
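
To make the versioning idea concrete, here is a minimal, hedged sketch in plain Python: each run records a content hash of its input data alongside its parameters and metrics, so any result can later be traced back to the exact data and settings that produced it. The file names, fields, and values are illustrative assumptions, not part of the framework.

```python
"""Illustrative sketch: record which data and parameters produced which result.
The file layout and field names here are assumptions for illustration only."""
import datetime
import hashlib
import json
from pathlib import Path


def hash_file(path: Path) -> str:
    """Content hash of a data file, so changed data is detectable later."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def log_experiment(data_path: Path, params: dict, metrics: dict,
                   log_path: Path = Path("experiments.jsonl")) -> None:
    """Append one experiment record: data fingerprint, parameters, metrics."""
    record = {
        "timestamp": datetime.datetime.now().isoformat(),
        "data_file": str(data_path),
        "data_sha256": hash_file(data_path),
        "params": params,
        "metrics": metrics,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    # Hypothetical file and values, purely for illustration.
    log_experiment(Path("data/train.csv"),
                   params={"learning_rate": 1e-3, "epochs": 10},
                   metrics={"accuracy": 0.93})
```

In practice, dedicated tools such as DVC (data versioning) and MLflow (experiment and model tracking) cover this ground far more completely; the sketch only illustrates the traceability requirement.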

Why a Framework?

This is a “living” set of slides!

Purpose: To quickly “Bootstrap” you into a research environment

Our Framework Starts with GitHub

(Step 1) Create a GitHub Account

(Step 2) Email GitHub Account ID

If you are not part of the nd-crane organization, email pmoreira@nd.edu with your GitHub ID so we can add you to the nd-crane organization.

(Step 3) Go through the GitHub Skills courses, starting with Introduction to GitHub

  • Go through “First Day on GitHub”
    • Introduction to GitHub
    • Communicate using Markdown
    • GitHub Pages (We will use this with Quarto)
  • Go through “First Week on GitHub”
    • Review pull requests
    • Resolve merge conflicts
    • Release-based workflow
    • Connect the dots
    • Code with Codespaces

AI and Machine Learning

FastAI FastBook – “A Production Mindset”

A Preview…
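
As a preview, and only as a hedged sketch, here is a minimal example in the spirit of the fastai vision API used in the opening chapter of the FastBook; treat the dataset choice, labeling rule, and hyperparameters as illustrative rather than prescriptive.

```python
# Hedged preview sketch of a fastai training loop, in the spirit of
# "Practical Deep Learning for Coders"; settings here are illustrative.
from fastai.vision.all import *

# Download a small sample dataset bundled with fastai (Oxford-IIIT Pets).
path = untar_data(URLs.PETS) / "images"

def is_cat(filename):
    # In the Pets naming scheme, cat breeds start with an uppercase letter.
    return filename[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```

Running this requires a fastai install (and ideally a GPU); the Codespaces and Dev Containers setup discussed below is one way to get such an environment reproducibly.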

Visual Studio Code

(Peter Only!) “Dev Containers” and FastAI

Using Codespaces to work with the “Practical Deep Learning for Coders” course

But Dr. Vardeman, I know all of this!

Getting Started With LLMs
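
Before the details, here is a minimal, hedged sketch of calling a hosted LLM from Python. The `openai` package (pre-1.0 interface), the model name, and the prompts are assumptions for illustration; any chat-completion-capable provider would work similarly.

```python
# Illustrative sketch of a chat-completion call.
# Assumes the `openai` package (pre-1.0 interface) and an OPENAI_API_KEY
# environment variable; the model name is an illustrative choice.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, what is Trusted AI?"},
    ],
    temperature=0.2,
)

# The reply text lives in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```

As with any experiment, the prompts, model versions, and responses are worth versioning alongside code and data.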

Python

Python for Data Analysis
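
As a hedged taste of the pandas-style workflow that Python for Data Analysis covers, the snippet below loads a CSV and summarizes it with the split-apply-combine pattern; the file name and column names are invented for illustration.

```python
# Minimal, illustrative pandas sketch; file and column names are hypothetical.
import pandas as pd

# Load a (hypothetical) CSV of experiment results.
df = pd.read_csv("results.csv")

# Quick structural checks.
print(df.head())
print(df.dtypes)

# Group, aggregate, and sort: the split-apply-combine pattern.
summary = (df.groupby("model")["accuracy"]
             .agg(["mean", "std", "count"])
             .sort_values("mean", ascending=False))
print(summary)
```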