Introducing the Canvas Permissions Planner, Built Entirely with AI

By Shane Argo, CEO, All the Ducks

If you've ever built a Canvas LTI tool or a REST API integration, you've hit this wall: which role permissions does your API user actually need?

Canvas doesn't document this. There's no table that says "this endpoint requires these permissions". Instead, you're reading API docs, looking at permission names, making educated guesses, changing a role's permissions, waiting for the changes to propagate, and testing. If it still doesn't work, you guess again. It's slow, frustrating work, and it gets worse the more endpoints your integration touches.

The information does exist, though. It's in the Canvas source code. Every API endpoint has permission checks in the Ruby controllers, and if you know where to look and how to read them, you can map out exactly which permissions each endpoint requires.

We work with Canvas every day, and I knew this data was in there. What I didn't have was a practical way to extract it across every endpoint in the API. Doing that manually would have taken weeks. Then it occurred to me: this is exactly the kind of task AI is good at. Point it at the source code, tell it what to look for, and let it do the systematic extraction while I validate the results.

That was the moment the tool became possible. What started as a mapping exercise turned into a markdown document, and then into a fully built web application, all using AI.

What the Canvas Permissions Planner does

The Canvas Permissions Planner (https://canvas-permissions.alltheducks.com) is a free, open-source tool that maps Canvas REST API endpoints to the role permissions they require. It covers 464 endpoints and 117 distinct permissions.

Select the endpoints your tool or integration calls, and the Permissions Planner tells you exactly which permissions your API user needs. The results are grouped, deduplicated, and ready to configure. No more guessing and then waiting for permission changes to take effect just to test a theory.
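
The aggregation idea behind this is straightforward. Here's a minimal sketch, with a hypothetical endpoint-to-permissions mapping standing in for the tool's real dataset (the endpoint paths and permission names below are illustrative, not taken from the tool):

```python
# Hypothetical slice of an endpoint -> required-permissions mapping.
ENDPOINT_PERMISSIONS = {
    "GET /api/v1/courses": {"read_course_list"},
    "GET /api/v1/courses/:id/users": {"read_roster"},
    "POST /api/v1/courses/:id/enrollments": {"manage_students", "read_roster"},
}

def required_permissions(selected_endpoints):
    """Union the permission sets of the selected endpoints, deduplicated."""
    needed = set()
    for endpoint in selected_endpoints:
        needed |= ENDPOINT_PERMISSIONS[endpoint]
    return sorted(needed)
```

Selecting the users and enrolments endpoints above, for example, yields just two distinct permissions even though `read_roster` is required by both.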

It runs entirely in the browser. Nothing is sent to a server, there's no account to create, and no data is stored. It's a static site hosted on GitHub Pages.

A few things that might surprise you in a free community tool: it supports 30 languages and has full dark mode, print-optimised layouts, and copy-to-clipboard for sharing permission lists. Some permissions are also flagged as optional (for example, permissions that only apply when you need to use the SIS ID fields), so you can see which ones are strictly required and which depend on your configuration.

The source code is on GitHub (https://github.com/AllTheDucks/canvas-api-permissions-planner). Please use it, fork it, and contribute to it.

How we built it

I built the Canvas Permissions Planner using Claude Code, Anthropic's AI coding tool. The entire project, from the initial data extraction through architecture planning and deployment, was developed through a collaboration between me and an AI. I didn't write a single line of code, but I have read and approved every line.

It started with the data. I pointed Claude Code at the Canvas LMS source code on GitHub and directed it to systematically extract the permission requirements from the Ruby controllers for every API endpoint. I knew what the permission checks looked like and where to find them in the codebase. The AI could do the methodical work of going through hundreds of controllers and pulling out the mappings. I reviewed and spot-validated the results. That core dataset is what makes the tool possible, and it's the part that simply wouldn't have been feasible to produce manually for a free community tool.
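
To give a feel for what that extraction involves, here's a toy sketch of the kind of scan being described. Canvas controllers guard endpoints with checks along the lines of `authorized_action(@context, @current_user, :permission)`; the regex below is a deliberate simplification, since the real checks take several forms that need more careful parsing (and AI judgement rather than a single pattern):

```python
import re

# Toy pattern for one common form of Canvas permission check.
# Real extraction has to handle many variants; this is illustrative only.
CHECK_PATTERN = re.compile(r"authorized_action\([^)]*,\s*:(\w+)\)")

def extract_permissions(ruby_source):
    """Return the unique permission symbols referenced by checks in the source."""
    return sorted(set(CHECK_PATTERN.findall(ruby_source)))
```

Run over a controller method containing `authorized_action(@context, @current_user, :read_roster)`, this would surface `read_roster` as a required permission for that endpoint.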

From there, the project grew into a full frontend web application. The collaboration looked like this: 80 sessions over 17 days, roughly 41 hours of active development time, and approximately 7,000 human messages. Every one of those messages was a direction, a correction, a confirmation, or a question. This wasn't a case of typing a prompt and getting a finished product. It was intensive, iterative work where I brought the domain expertise and the AI brought speed and breadth.

The approach mattered as much as the tool. I spent more time writing architectural plans than writing code. Before any feature was built, the design was mapped out: How should the data be structured? What's the right way to handle localisation when Canvas uses a non-standard i18n key format? How do we validate external data loaded at runtime? Every one of those questions required someone who understands Canvas, understands software architecture, and knows which trade-offs are worth making. AI doesn't have that judgement. A human expert does.

Here's an example of what that looks like in practice. I wanted the tool's URLs to be sharable, so someone could select a set of API endpoints and send the link to a colleague. The AI's first instinct was to add a URL parameter for every selected endpoint. That works, technically. But if you select 30 endpoints, you get a URL that's hundreds of characters long and breaks when you paste it into an email or a chat message. I directed a different approach that produces compact, sharable URLs regardless of how many endpoints are selected. The AI built it quickly once it had the right direction. Without that direction, it would have shipped something that technically worked but practically failed.
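
One way to get fixed-length URLs (illustrative only, not necessarily the scheme the tool actually uses) is to encode the selection as a bitmask over a canonical endpoint list and base64url-encode it:

```python
import base64

# Sketch: encode selected endpoint indices as a bitmask, then base64url it.
# 464 endpoints fit in 58 bytes (~78 URL characters) no matter how many
# endpoints are selected. Illustrative, not the tool's actual scheme.
def encode_selection(selected_indices, total=464):
    bits = bytearray((total + 7) // 8)
    for i in selected_indices:
        bits[i // 8] |= 1 << (i % 8)
    return base64.urlsafe_b64encode(bytes(bits)).rstrip(b"=").decode()

def decode_selection(token, total=464):
    data = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
    return [i for i in range(total) if data[i // 8] & (1 << (i % 8))]
```

The key property is that the URL length depends on the total number of endpoints, not the number selected, so a 30-endpoint selection pastes into an email just as cleanly as a 3-endpoint one.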

The AI's contribution was genuine, though. Once the direction was set, it moved fast. It could draft an implementation, suggest alternatives, catch edge cases, and handle the mechanical work of building features to spec. The result was a partnership where each side did what it does best.

What else AI made possible

The data extraction was the big one, but it wasn't the only thing AI made feasible.

Localisation into 30 languages

I can build a language-switching mechanism. But for a free, open-source tool, there's no budget to hire translation services for 30 languages. AI made it feasible. The translations may need polish from native speakers in some cases, but the tool is accessible to a global Canvas community that it wouldn't have reached otherwise.

The level of polish

Dark mode, accessibility compliance, print stylesheets, copy-to-clipboard. I have the skills to build all of these. But for a side project? I wouldn't have bothered with most of them. With AI, the cost-benefit calculation shifts. Features that were "I could, but it's not worth my time" became "ask, review, ship". The result is not a more polished tool than I could have produced alone, but a more polished tool than I would have produced alone.

The pattern across all of this isn't just speed. AI didn't only make building faster; it made things feasible that weren't feasible before. That's a distinction worth thinking about.

What comes next

This is the beginning of a conversation. Over the coming weeks, I'll be writing about what this experience has once again confirmed about working with AI: why planning matters more when AI is involved; why domain expertise is the essential ingredient that makes AI useful; and what this means for universities thinking about AI in their own workflows.

If you work with Canvas, try the Permissions Planner (https://canvas-permissions.alltheducks.com) and let us know what you think. If you're interested in how AI is changing the way we build tools for higher education, follow All the Ducks on LinkedIn (https://www.linkedin.com/company/all-the-ducks) or connect with me directly (https://www.linkedin.com/in/shaneargo).

And if you're heading to CanvasCon 2026 in Sydney (https://www.instructure.com/en-au/events/canvascon/anz) this August, come find us at the booth.

All the Ducks is an Australian EdTech consultancy specialising in learning technology and digital transformation for universities. We're an Instructure Canvas Partner.

The Canvas Permissions Planner is free and open source: canvas-permissions.alltheducks.com | GitHub (https://github.com/AllTheDucks/canvas-api-permissions-planner)
