# Presenter Tips
## General Principles

### Narrate Before Running

Always explain what you’re about to do before executing a prompt:
1. **Intent** — “I’m going to ask Copilot to…”
2. **AI Action** — Run the prompt, let the audience watch
3. **Validation** — “Notice how it picked up on…”
This structure helps the audience follow along even if the output is dense.
### Keep Diffs Small and Visible

- Prefer one backend service (`order-service` or `payment-service`) for consistency
- Avoid prompts that touch many files simultaneously
- Use VS Code’s diff view to highlight changes
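Before opening the VS Code diff view, a quick command-line check can confirm the edit stayed scoped. This is a sketch, assuming the demo project is a git checkout; the `order-service/` path is illustrative and should match your chosen service directory:

```shell
# Show only what the last Copilot edit touched, scoped to one service.
git diff --stat HEAD -- order-service/

# List any files changed outside that service (prints nothing
# if the prompt stayed scoped to the one service).
git diff --name-only HEAD | grep -v '^order-service/' || true
```

The `--stat` summary gives the audience an at-a-glance count of files and lines changed, which reinforces the “small diffs” message before you walk through the code.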
### Manage Expectations

- “AI output varies each run — the structure will be consistent, but details may differ.”
- “We always review AI-generated code before merging.”
## Pacing

| Audience | Suggested Total Time | Focus Areas |
|---|---|---|
| Executives | 10 min | Plan Agent, Review Agent, Security Overview |
| Engineering leads | 20 min | Full end-to-end sequence |
| Developers | 30+ min | Full sequence with live coding follow-ups |
### Timing Tips

- Allow 10–15 seconds for Copilot to generate responses
- Don’t rush to fill silence while Copilot is working — narrate what’s happening
- If a response is still generating, preview what to expect
## Demo Environment Setup

## Recovery Strategies

### Copilot Gives a Broad or Off-Topic Response

Follow up with: “Make this minimal and repo-specific.”
This grounds Copilot back in the project context.
### Response Takes Too Long

- Switch to a pre-prepared example while waiting
- Narrate: “While this generates, let me show you what the typical output looks like…”
### Code Doesn’t Compile or Tests Fail

- Don’t panic — this is a teaching moment
- Narrate: “This is exactly why we review AI-generated code. Let’s see what needs adjusting.”
- Fix one issue live to show the iterative workflow
### Agent Doesn’t Respond

- Check that the agent name is spelled correctly (e.g., `@bdd-specialist`)
- Try refreshing the Copilot Chat panel
- Have a backup screenshot of typical output
## Storytelling Arc

For maximum impact, structure the demo as a story:
1. **Setup** — “We have a bicycle e-commerce platform with 6 microservices…”
2. **Challenge** — “The PM wants a wishlist feature. Let’s see how Copilot helps across the entire SDLC.”
3. **Journey** — Walk through Plan → Code → Review → Test → Deploy → Secure
4. **Resolution** — “In 20 minutes, we went from idea to implementation plan, working code, tests, CI pipeline, and security validation — all with AI assistance.”
## Common Questions & Answers

**Q: Does Copilot replace developers?**
“No — it’s a force multiplier. Developers still make design decisions, review output, and own quality. Copilot handles the repetitive parts so developers focus on what matters.”
**Q: How does it know about our codebase?**
“Copilot reads the repository context — file structure, imports, naming conventions, configuration files — and uses that to generate contextually relevant code.”
**Q: Is the generated code secure?**
“AI-generated code should go through the same review process as human-written code. That’s why we showed the Review Agent and CodeQL — they catch issues regardless of who wrote the code.”
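If the audience asks to see the scanning setup behind this answer, a minimal CodeQL workflow is useful to have on hand. This is a hedged sketch, assuming GitHub Actions is in use; the file path `.github/workflows/codeql.yml` and the `javascript` language matrix are illustrative and should be adjusted to the demo repo’s stack:

```yaml
# .github/workflows/codeql.yml (illustrative path)
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload scan results
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript   # adjust to the demo repo's languages
      - uses: github/codeql-action/analyze@v3
```

The point to make on stage is that this workflow runs on every push and pull request, so AI-generated and human-written code go through the same scan.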
**Q: What about data privacy?**
“Copilot processes code in context but doesn’t store or train on your private repository code. Check GitHub’s Copilot trust documentation for the latest details.”