Most AI trainings leave people informed but stuck. They cover what AI is, maybe show a few demos, and send everyone back to their desks with no clear next step.
This is where your mission meets AI. And where clarity begins.
In these workshops, faculty and leadership step back from the noise and engage the real questions:
What are we protecting?
What are we building?
What are we becoming?
Whether your team is brand new to AI or already using it and wanting to do it better, these sessions give you a shared foundation, practical skills, and the language to move forward together.
K-12 teachers and administrators
University departments and leadership teams
Mid-size values-based businesses and nonprofits
Professional associations
Faculty and staff navigating AI in their day-to-day work
Teams that want to move from conversation to action
A shared understanding of what generative AI is and what it isn't
Hands-on experience using AI tools in ways that are relevant to your work
A clearer sense of how to align AI use with your organization's values
Practical, repeatable strategies for integrating AI responsibly
Language to keep the conversation going after the session ends
This is not a lecture. This is active learning built around your institutional mission and current AI strategy.
Faculty are:
Discussing real scenarios
Working through tensions
Asking better questions
Building shared language
Faculty leave with clarity, not just information.
Clear understanding of AI risks and opportunities
Shared language across faculty and leadership
Practical next steps aligned to your mission
Confidence to move forward intentionally
A clear, accessible, and interactive introduction to generative AI — built for people who want honest answers, not hype. Breaks down core concepts, clears up common misconceptions, and builds the confidence participants need to move forward.
This session answers:
What is AI, really, and what isn't it?
How does it work?
Why does it matter for our work?
How do we start using it well?
Participants leave with a solid foundation and the confidence to keep exploring.
Reviews legal and ethical risks using real examples, case studies, and live Q&A. Focused on building best practices and guidelines that actually fit your organization's operations — not generic rules nobody follows.
This session answers:
How do we use AI safely?
What tools and behaviors put our organization at risk?
What does responsible AI look like in our specific context?
How do we build guardrails people will actually follow?
Participants leave with a grounded understanding of how to navigate AI responsibly and a starting point for policy.
Available as a standalone session or series.
A professional development experience designed to move faculty from anxiety to confident, informed AI practice. Addresses common fears, clears up misconceptions, and builds practical fluency in an institutional context without dismissing the real concerns educators bring to the room.
This session answers:
Why does AI feel so threatening, and is that fear warranted?
What do I actually need to know to use this responsibly?
How do I use AI without compromising my integrity or my students' learning?
What does AI-literate teaching actually look like?
Participants leave ready to engage with AI thoughtfully, not just tolerate it.
A hands-on professional development experience equipping faculty to integrate AI tools thoughtfully into their teaching, research, and course design. Customized to the disciplinary context and institutional culture, not a template applied to every department the same way.
This session answers:
Which AI tools are actually worth using for my work?
How do I integrate AI without compromising my pedagogical values?
What does intentional AI use look like in my discipline?
How do I help my students build AI fluency alongside the skills that matter most?
Participants leave with practical strategies and a plan they actually built, not one handed to them.
Available as half-day, full-day, or multi-session formats.
An immersive, hands-on session that walks participants through building a custom AI bot, no coding required. Includes step-by-step guidance, live testing, and real examples drawn from institutional contexts. Participants leave with a fully built bot and the skills to build more.
This session answers:
What can a custom AI bot actually do for my work?
How do I build one, really, step by step?
How do I make sure it's safe and appropriate for my context?
What do I do with it after today?
Participants leave with a working bot they built themselves.
A hands-on, full-day intensive that takes participants from zero to confident Boodlebox user, no experience required. Moves through three progressive stages: getting comfortable with the platform, integrating Boodlebox into classroom and instructional practice, and building custom bots from scratch. Culminates in bot chaining: connecting bots into automated, multi-step workflows tailored to your own institutional context.
This session answers:
How do I actually get comfortable with this platform?
How do I use it in my teaching or administrative work?
How do I build bots and connect them into something more powerful?
What can I keep building after this?
Participants leave with working bots they built themselves and the skills to keep building.
Available as a full-day or multi-session format.
A facilitated session that helps leadership teams think clearly before committing to policy language. Explores the ethical, practical, and cultural dimensions of AI governance so what gets written reflects genuine institutional values, not just reactive compliance.
This session answers:
What are we actually trying to protect or promote with a policy?
What values should drive our AI governance?
What are the risks of getting this wrong, and what does getting it right look like?
How do we write something people will understand and actually follow?
Participants leave with the foundation to write policy that holds up and the confidence to defend it.
Ideal for academic leadership and governance groups.
A facilitated discovery session that helps leadership teams develop a values-based AI framework unique to their organization. Designed to answer the question every institution eventually has to face: what does responsible AI look like here?
This session answers:
What principles should guide how we use AI?
How do our values translate into concrete decision-making?
How do we build a framework our people can actually use?
What does accountability look like when AI is involved?
Participants leave with a working draft of their AI ethics framework, not just a conversation.
Ideal for leadership teams and governance groups.
Workshops are often where the best consulting relationships begin. Once your team has a shared foundation, many organizations choose to go deeper, developing a full AI strategy, building policy, or designing a curriculum that sticks.
If you're thinking beyond a single session, consulting is the natural next step.
All engagements begin with a complimentary discovery conversation.