Background and Awareness
This guide is directed at instructors and offers practical approaches to the teaching and learning questions that AI raises.
This guide suggests steps faculty can take when students have used AI unethically and offers clear definitions of ethical AI use for students. It urges our community to communicate expectations clearly and to allocate our resources equitably, and it provides guidelines for keeping our own data safe. It establishes a set of core competencies that each student will master prior to graduation; mastery of these competencies will allow students to enter the workforce without apprehension and help them stand out among their peers. The guide also introduces instructors to university-licensed tools as they become available, along with resources for teaching and mentoring, and offers strategies for improving pedagogy. Approaching AI as a community offers the brightest way forward for each of us and for the possibilities this new technology can provide.
This guide provides an overview of the following topics: perspectives on AI use in teaching and learning; AI core competencies for students; pedagogical support; and tools and resources.
As AI continues to evolve and become integrated into many aspects of society, it is essential for users to approach its adoption with awareness and caution:
Understand the Limitations: AI systems are not infallible and may exhibit biases or errors. It is crucial to recognize the scope and capabilities of AI algorithms and their potential impact on decision-making processes.
Data Quality Matters: AI models rely on large datasets to learn and make predictions. Ensuring the quality, diversity, and representativeness of training data is essential for the accuracy and fairness of AI systems.
Ethical Considerations: AI technologies raise ethical concerns related to privacy, transparency, accountability, equitable access, and societal impact. Organizations and individuals should prioritize ethical principles and responsible AI practices in the development and deployment of AI systems.
Human Oversight and Intervention: While AI can automate tasks and augment human capabilities, human oversight and intervention are still necessary to monitor AI systems, interpret results, and make informed decisions based on AI-generated insights.
Continuous Learning and Adaptation: AI is not static; it requires continuous learning, adaptation, and refinement over time. Investing in ongoing training, validation, and improvement of AI models is essential to ensure their effectiveness and relevance in evolving environments.
Collaboration and Interdisciplinary Approach: Effective AI deployment often requires collaboration between domain experts, data scientists, software engineers, and other stakeholders. Embracing an interdisciplinary approach and fostering collaboration across teams can lead to more successful AI initiatives.
Transparency and Explainability: AI systems should be designed to be transparent and explainable, allowing users to understand how decisions are made and assess the reliability of AI-generated outputs. Prioritizing transparency and explainability enhances trust and accountability in AI applications.