What Faculty Lose When They Ignore Generative AI in the University Classroom
Which questions about AI in teaching will I answer and why do they matter?
Faculty on the front lines face a flood of practical and ethical questions about generative AI. Decisions made now will shape student learning, assessment integrity, and departmental reputation for years. I will answer these six questions because they reflect the choices instructors and department heads wrestle with each semester:

- What exactly does generative AI change about teaching, assessment, and learning?
- Is banning AI enough to preserve academic integrity?
- How can instructors practically redesign assignments and assessments to account for AI?
- When should departments invest in faculty training, new policies, or AI tools?
- What higher-level curriculum changes should departments consider?
- How will AI shape the future of higher education over the next decade?

These questions matter because ignoring them is not a neutral choice. Doing nothing is a decision that leaves instructors reactive, students confused about expectations, and programs vulnerable to quality erosion. I’ll use concrete examples and scenarios to make the implications clear and give practical steps you can apply next semester.
What exactly does generative AI change about teaching, assessment, and learning?

At a basic level, generative AI alters three things simultaneously: what's easy for students to produce, how instructors can detect or verify authorship, and what skills are most valuable to teach. Think of AI as a new technology like the calculator: it changes what routine work students can do quickly, which forces a rethinking of whether an assignment tests procedural skill or deeper understanding.
Concretely:
- Work product availability: Essays, code, problem solutions, and design mockups can be produced in minutes. A student can generate a draft, iterate, and submit without showing the learning process.
- Assessment signal-to-noise: Traditional signals (clean prose, correct formatting, functional code) no longer reliably indicate mastery. Students can present polished outputs without mastering underlying concepts.
- New literacies: Students must learn to prompt effectively, evaluate AI output critically, and document AI assistance ethically. Those are skills many faculty did not plan to teach, yet they matter for future workplaces.

Example scenario: In an introductory programming course, a student uses AI to write a program that compiles and passes tests but can't explain the algorithm during an office hour; the sketch below makes this concrete. The instructor must decide whether the grade should reflect the working program or the student's understanding. That split is the central change AI introduces.
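To make the scenario tangible, here is a minimal, hypothetical Python sketch (the function and tests are invented for illustration): an AI-generated solution that satisfies an autograder while revealing nothing about whether the student understands it.

```python
# Hypothetical illustration of the office-hour scenario above: the submitted
# function works and passes the course's tests, but passing tests alone does
# not show the student can explain why it works.

def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# The autograder's view: all assertions pass, so the product looks perfect.
assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 2) == -1
assert binary_search([], 4) == -1

# An office-hour probe the tests cannot replace:
# "Why must the list be sorted? What keeps lo and hi from crossing forever?"
```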
Is banning AI enough to preserve academic integrity?

Bans give the illusion of control but often fail in practice. When students can reach AI from their phones, a blanket ban is like asking them to drive without GPS in a city where everyone else uses it. Some will comply, some will flout the rule, and many will be uncertain about what counts as allowed help.
Why bans fall short:
- Enforcement is costly and imperfect. Detection tools produce false positives and false negatives, and manual checks consume faculty time.
- Bans don't teach students how to use AI responsibly. Without instruction, students who must use AI in internships or jobs will be unprepared.
- Equity issues arise. Students with access to premium tools at home have an advantage over peers who do not.

Better approach: a policy that emphasizes transparency and learning outcomes. Ask students to disclose AI use and to submit process artifacts: outlines, version histories, annotated AI outputs, or short reflective statements explaining how AI contributed and what the student learned. That preserves integrity while building useful skills.
How can instructors practically redesign assignments and assessments to account for AI?

Redesigning does not mean eliminating essays or coding projects. It means shifting emphasis from final product to process, critical judgment, and authentic tasks that align with real-world competencies.
Practical steps to try next semester:
- Shift to staged assessments. Require a proposal, an annotated AI-assisted draft, and a defense. Each stage reveals understanding and reduces the incentive to submit a final polished product without having done the work.
- Use in-person or recorded oral components. Short oral exams or video explanations where students walk through their reasoning make authorship and comprehension visible.
- Design questions that require personal data, local context, or iterative work. For example, ask for an analysis of a campus dataset or an interview-based report that AI cannot fabricate accurately.
- Create transparent rubrics that reward critique of AI-generated content. Ask students to submit AI outputs and then annotate what they accepted, what they corrected, and why.
- Use open-book formats that focus on problem framing, synthesis, and argumentation rather than recall. For instance, present a messy case study and grade on how well students weigh evidence and propose interventions.

Example assignment redesign: Instead of a single 2,000-word literature review, ask for (a) a two-page annotated bibliography with sources and marginal notes, (b) a synthesis matrix showing themes across sources, and (c) a 10-minute recorded presentation explaining the synthesis choices. This makes it hard to outsource everything to AI and centers scholarly judgment.
When should departments invest in faculty training, new policies, or AI tools, and what advanced choices exist?

Departments should treat AI as an ongoing curricular and operational issue, not a one-off checklist item. Start with an assessment of risk and capacity: which courses are most vulnerable to misuse, which faculty feel unprepared, and what institutional support is available.
Three tiers of departmental actions:
- Low-cost, immediate steps: Create clear, public policies about AI use; provide exemplar assignment templates; hold workshops on documenting student process work. These require modest time but deliver high clarity.
- Medium investment: Fund faculty development programs on prompt design, AI literacy, and assessment redesign. Purchase campus licenses for AI tools that support teaching and set up equitable access.
- Higher-cost, strategic investments: Integrate AI literacy into general education, hire instructional designers, and develop campus-wide assessment platforms that track student drafts and version histories.

To make decisions, departments often ask about cost and tools. Below is a simple comparison of options to illustrate trade-offs.
| Option | Typical Cost Range | Pros | Cons |
| --- | --- | --- | --- |
| Policy drafting and workshops | Low (staff time) | Fast to implement, improves clarity | Relies on faculty follow-through |
| Campus AI tool licenses | Medium (per-seat or campus license) | Equitable access, curricular integration | Budgetary commitments, vendor dependence |
| Instructional design hires | High (new positions) | Scales redesign, sustained support | Requires long-term funding |

Example scenario: A department with a large introductory course might invest in a part-time instructional designer to redesign assessments in five high-enrollment sections. The upfront cost is higher than running a policy workshop, but it reduces grading load and improves learning outcomes across hundreds of students; the back-of-envelope sketch after this table shows how the per-student math can favor the larger investment.
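As a rough illustration only (every number below is an invented placeholder, not departmental data), the per-student arithmetic behind that scenario might look like this:

```python
# Hypothetical back-of-envelope comparison of two departmental investments.
# All figures are made-up placeholders, not real cost data.

workshop_cost = 2_000        # staff time for a one-off policy workshop ($)
designer_cost = 30_000       # part-time instructional designer for a term ($)

workshop_students = 150      # students reached by the workshop's policy changes
designer_students = 900      # students in the five redesigned sections per term
terms_reused = 4             # terms the redesigned assessments stay in use

per_student_workshop = workshop_cost / workshop_students
per_student_designer = designer_cost / (designer_students * terms_reused)

print(f"Workshop: ${per_student_workshop:.2f} per student")  # $13.33
print(f"Designer: ${per_student_designer:.2f} per student")  # $8.33

# The exact numbers are invented; the structure of the decision is the point:
# a higher upfront cost can win once its benefits amortize over many students
# and repeated course offerings.
```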
What higher-level curriculum changes should departments consider?

Beyond assignments and policies, AI invites a rethink of what majors teach. Rather than treating AI as an add-on module, many departments will find value in integrating AI literacies into core learning goals.
Consider these curriculum moves:
- Embed AI literacy outcomes: Require students to demonstrate the ability to evaluate AI outputs, detect bias, and reflect on ethical implications as part of existing course outcomes.
- Create cross-disciplinary offerings: Joint courses between computer science and the humanities can teach prompt design, critical evaluation, and contextual analysis.
- Foster authentic partnerships: Partner with local employers or community groups where students must use AI tools to solve real problems, then report on both the outcomes and the tools' limitations.

Analogy: Updating a curriculum for AI is like renovating a house before the rainy season. You can patch leaks one room at a time, but long-term resilience comes from rewiring, adding insulation, and improving drainage. Departments that only tweak assignments will still face systemic challenges later.
How will AI shape the future of higher education over the next decade?

Predicting exactly how is impossible, but we can map plausible trajectories. Think of AI's influence as three overlapping waves: acceleration of certain tasks, redefinition of key competencies, and institutional change in pedagogy and labor.
Possible developments to plan for, following those three waves:

- Acceleration: Routine production tasks (drafting, summarizing, boilerplate coding) become fast and cheap, pushing assessment further toward process and judgment.
- Redefinition: Competencies such as problem framing, verification of AI output, and ethical tool use rise in value relative to recall and routine production.
- Institutional change: Pedagogy, staffing, and assessment infrastructure adapt, from AI literacy requirements to platforms that track drafts and version histories.

Concrete example: A mid-size university introduces an AI literacy requirement in general education. First-year students must complete a course that covers prompt construction, bias detection, and ethical decision-making. Graduates show stronger workplace readiness, and employers begin to recognize the microcredential, which boosts enrollment in that program.
Final metaphor: Treat AI as a tide rather than a single storm. Standing on the shore and refusing to acknowledge the water will leave departments soaked when waves reach the campus. Those who study tidal patterns, build levees where needed, and learn to navigate the currents will preserve what matters - rigorous learning and meaningful assessment - while giving students skills that reflect the world outside the classroom.
Action checklist for the next semester

- Draft a short, clear AI policy for your course that emphasizes disclosure and learning artifacts.
- Convert at least one summative assignment into a staged process with a reflective component.
- Offer one workshop or short module on evaluating AI outputs and documenting AI use.
- Talk with your department chair about budget lines for instructional support or equitable tool access.
- Collect student feedback on AI-related confusion and adapt communications accordingly.

Ignoring AI is not a neutral act. It cedes control over pedagogy and learning outcomes. By asking the right questions, trying concrete redesigns, and treating policy and training as part of curriculum, faculty can protect academic standards and prepare students for a world where AI is part of the toolkit, not the answer key.