About AI Success Factors: Engineering Trust in Deployments
AI Success Factors: Engineering Trust in Deployments is an initiative of the Laboratory for Assured AI Applications Development (LA3D) at the University of Notre Dame’s Center for Research Computing. This blog delves into the intricate dynamics of artificial intelligence (AI) research and development, with a strong emphasis on AI engineering, trust, and knowledge engineering.
As the pace of AI evolution accelerates, understanding the key factors for successful deployment becomes critical. This blog focuses on the intersection of trust and engineering, two pivotal components that determine the efficacy and acceptability of AI applications in various societal contexts.
The Laboratory for Assured AI Applications Development operates with a commitment to “translational research,” a methodology that drives us to convert theoretical advancements in AI into practical applications. This commitment extends to the creation of an efficient and secure AI cyberinfrastructure, designed to support the deployment of trustworthy and effective AI applications.
AI Success Factors serves as a communication medium, helping to disseminate the work conducted by our lab and facilitating knowledge exchange within the AI community. We invite active engagement and encourage thoughtful discussions, aiming to foster a collaborative atmosphere that enriches our collective understanding of AI.
Our vision for AI Success Factors is to contribute to an AI future marked by trusted and efficient applications, where AI engineering is comprehensible, accessible, and actionable. The blog aims to provide insights and perspectives that enable readers to grasp the nuances of AI deployment, assisting them in navigating the ever-evolving landscape of AI research and development.
Whether you are a student, a professional, or an enthusiast interested in the progression of AI, AI Success Factors offers a wealth of knowledge and insights into the key aspects that shape the success of AI deployments. Through this platform, we hope to further the understanding and acceptance of AI and its many transformative possibilities.
Our Toolset
AI Success Factors: Engineering Trust in Deployments is more than just a blog: it is a demonstration of how modern data science tools can streamline and simplify complex workflows.
Our blogging platform is built with Quarto, an open-source scientific and technical publishing system known for its flexibility and robustness. Quarto supports a wide range of source formats, including Jupyter notebooks, R Markdown documents, and plain Python or R scripts.

Quarto's distinctive feature is its support for executable code, in languages such as Python and R, embedded directly within a document. This allows us to create data-driven blog posts in which the output, whether a graph, a table, or another data representation, is generated directly from the included code. This adds a transparent, reproducible element to our posts, giving readers a deeper understanding of the concepts and analyses presented.

Quarto renders to multiple output formats, including HTML, PDF, EPUB, and Word, giving us the versatility to tailor our content to different needs. It also handles larger projects made up of multiple files, which lets us manage the blog efficiently while maintaining consistency and quality across posts. We use Quarto presentations to generate revealjs slide decks for our weekly slide shows. Quarto also forms the basis for fast.ai's nbdev framework for exploratory programming, which supports software development in Jupyter notebooks and automated generation of Python modules and documentation from those notebooks.
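To make the executable-document idea concrete, here is a minimal sketch of what a Quarto post with an embedded Python cell can look like (the title, data, and chart are purely illustrative):

````markdown
---
title: "A Hypothetical Data-Driven Post"
format: html
---

The figure below is generated when Quarto renders this document.

```{python}
# Quarto executes this cell at render time and embeds the resulting plot.
import matplotlib.pyplot as plt

deployments = [3, 5, 8, 13]  # illustrative numbers, not real results
plt.plot(deployments, marker="o")
plt.title("Example: deployments per quarter")
plt.show()
```
````

Rendering the file with `quarto render` produces an HTML page in which the chart appears inline alongside the prose.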
Development Containers serve as our primary environment for data science, providing a consistent and replicable framework for running our analyses. These containerized environments allow us to standardize our work and ensure that our research is reproducible and reliable. The dev container uses the Mamba solver to provision the Python environment from the specification in environment.yml.
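As a sketch, an environment.yml along these lines might look like the following; the channels and packages shown are illustrative rather than our actual specification:

```yaml
# Illustrative Conda/Mamba environment specification, not the lab's actual file
name: ai-success-factors
channels:
  - conda-forge
dependencies:
  - python=3.11
  - jupyterlab
  - pandas
  - matplotlib
```

When the container is built, a command such as `mamba env create -f environment.yml` resolves and installs these dependencies, so every contributor ends up with the same environment.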
We use Visual Studio Code (VS Code) as our primary code editor, taking advantage of its rich ecosystem of extensions and built-in features. VS Code's Jupyter notebook support allows us to develop and visualize our data models interactively within the editor, enabling us to produce intuitive, data-driven narratives for our blog posts.
All of our content is version controlled and hosted on GitHub. GitHub's robust versioning system allows us to manage changes effectively, track progress, and maintain a complete history of every piece of content we publish.
The blog is published using GitHub Pages, a platform that simplifies the deployment process and integrates seamlessly with our existing GitHub repository. This setup enables us to provide a reliable, accessible resource for our readers, wherever they are and whenever they choose to access our content.
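For readers who want to replicate this setup, one common way to publish a Quarto site to GitHub Pages is Quarto's built-in publish command; our exact deployment pipeline may differ, but the sketch below captures the idea:

```bash
# Render the site and push the generated output to the gh-pages branch
quarto publish gh-pages
```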
Finally, we utilize GitHub Codespaces to create a fully featured, cloud-hosted development environment that can be accessed from any device. This not only allows us to work from anywhere but also ensures that our setup can be easily replicated by other researchers and developers who wish to explore our code.
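Codespaces reads its configuration from a .devcontainer/devcontainer.json file in the repository. A minimal sketch of such a file might look like this; the base image and command are illustrative assumptions, not our exact configuration:

```jsonc
{
  // Illustrative dev container definition (devcontainer.json allows comments)
  "name": "ai-success-factors",
  "image": "mcr.microsoft.com/devcontainers/miniconda:latest",
  // Assumes mamba is available in the image; substitute conda if it is not
  "postCreateCommand": "mamba env create -f environment.yml"
}
```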
Each tool in our stack has been chosen for its ability to facilitate efficient, reliable, and transparent data science. By sharing our toolset, we hope to provide insight into our workflows and encourage a culture of open, reproducible research within the AI community.
Generative AI in Content Creation
In creating AI Success Factors: Engineering Trust in Deployments, we harness the power of artificial intelligence in conjunction with human expertise. We utilize advanced Generative AI models such as ChatGPT, Bing Chat, and Anthropic Claude in a guided, iterative process that combines the best of AI and human capabilities.
Our use of Generative AI begins with the drafting stage. These models, trained on extensive datasets, generate human-like text that matches our specified tone and style, providing us with a solid foundation for each blog post. This allows us to focus on the larger narrative without getting mired in the nitty-gritty details from the outset.
The AI-generated drafts, while sophisticated, are only a starting point. We employ a methodology of guiding the model to improve its output: providing more detailed prompts, specifying the desired output format, or asking the model to reason step by step before settling on a conclusion.
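As a hypothetical illustration of this kind of iterative guidance (neither prompt is a transcript of our actual sessions):

```text
Initial prompt:
  "Write an About page section describing our blog's toolset."

Refined prompt:
  "You are a technical writer for a research lab blog. In three short
  paragraphs, describe how we use Quarto, dev containers, and GitHub
  Pages. Think step by step about what a new reader needs to know
  first, then write the final text in a plain, direct tone."
```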
Once we have a draft that we're satisfied with, it's time for human intervention. Our team reviews the AI-generated content, refining and editing it to ensure it maintains the standards of accuracy, relevance, and depth that our readers expect. This process draws on human creativity, critical thinking, and domain expertise that even the most advanced AI cannot replicate.
This approach exemplifies our vision of a symbiotic relationship between AI and humans. By effectively integrating Generative AI into our content creation process, we not only boost our productivity and efficiency but also demonstrate the practical application of AI technologies in real-world scenarios. Thus, the blog becomes a testament to our commitment to translational research and our pursuit of an AI-integrated future.