Tested prompts for AI-assisted STEM teaching.
I teach [GRADE] [SUBJECT]. Create a [DURATION]-minute lesson aligned to [SPECIFIC NGSS STANDARD].

Include:
- A real-world hook that connects to students' lives
- 3 differentiated activities (approaching, on-level, extending)
- One formative assessment check
- Materials list (assume a typical classroom budget)

Format the lesson as: Objective > Hook > Activities > Assessment > Extension
Here is my existing lesson: [PASTE LESSON]

Differentiate this for three levels:
1. Approaching: Students who need scaffolding with [SPECIFIC SKILL]
2. On-level: Students meeting grade-level expectations
3. Extending: Students ready for deeper challenge

Keep the same core learning objective but adjust complexity, vocabulary, and support structures.
Create 5 inquiry-based [SUBJECT] problems for [GRADE] students, one per day.

Requirements:
- Each problem should take 15-20 minutes
- Progress from concrete to abstract across the week
- Include at least one problem that connects to [CAREER/REAL-WORLD CONTEXT]
- Aligned to [NGSS STANDARD]
- Include teacher notes for common misconceptions
Create a 4-level rubric (Beginning, Developing, Proficient, Advanced) for this standard: [PASTE NGSS PE]

For each level, describe:
- What the student CAN do at this level
- Observable evidence (what you would see/hear)
- Example student work or response

Format as a table.
Generate a [SUBJECT] explanation of [TOPIC] for [GRADE] students.

IMPORTANT: Intentionally include 2-3 subtle errors that a student should be able to catch using:
- Their textbook
- A calculator
- Prior knowledge from [PREVIOUS UNIT]

I will use this as a "Check the Machine" classroom activity. Do NOT tell me where the errors are.
I asked an AI to write this code: [PASTE CODE]

The code is supposed to: [DESCRIBE INTENDED BEHAVIOR]
But instead it: [DESCRIBE ACTUAL BEHAVIOR]

Walk me through the debugging process step by step. Explain what each line does, identify the error(s), and show the corrected version with comments explaining the fix.
I used an AI to generate the following educational content for [GRADE] [SUBJECT]: [PASTE AI-GENERATED CONTENT]

You are a fact-checker. For this content:
1. Extract every factual claim, number, formula, and standard code
2. Rate each as: ✅ Verified, ⚠️ Unverified, or ❌ Incorrect
3. For any item rated ⚠️ or ❌, explain the issue and provide the correct information with a source
4. Flag any claims that "sound authoritative" but cannot be independently verified
5. Check that every NGSS/CCSS/CSTA code cited actually exists and that the content matches what the standard requires

End with an overall verdict: Keep as-is / Fix with specific edits / Reject and regenerate.
I want to teach my [GRADE] students about AI hallucination. Generate a [SUBJECT] problem that is likely to cause AI models to produce plausible-sounding but incorrect answers.

Requirements:
- The topic should be something students have recently studied: [RECENT UNIT/TOPIC]
- The problem should involve at least one calculation or specific factual claim
- It should be checkable by a student using their textbook and a calculator
- DO tell me the correct answer so I can prepare, but format it at the very bottom under a "TEACHER KEY" heading

I will have students ask this same problem to an AI and then verify the output using the Check the Machine protocol.
Here is an AI-generated explanation of [TOPIC] for [GRADE] students: [PASTE AI OUTPUT]

Extract every single factual claim in this text and list them in a numbered table with columns:

| # | Claim | Type (fact/number/citation/standard) | Verifiable? (Yes/No) | How to verify |

Do NOT evaluate whether the claims are correct — I want my students to do that part. Just give me the inventory so I can turn it into a verification worksheet.
Create 4 short AI-style explanations of [TOPIC] for [GRADE] students. Each should be 3-5 sentences long and written in the confident, fluent style typical of AI-generated content.

- Explanation A: Completely correct
- Explanation B: Contains a subtle calculation error
- Explanation C: Contains a factual error disguised in correct-sounding language
- Explanation D: Cites a standard that doesn't exist or doesn't match the content

Label them A, B, C, D but do NOT reveal which has which error. Include an answer key at the bottom under "TEACHER KEY" with the specific error in each.

I'll use these as a group activity where students rank them from most to least trustworthy and defend their reasoning.
An AI generated this solution to a math problem for my [GRADE] class:

Problem: [PASTE PROBLEM]
AI's Solution: [PASTE SOLUTION]

Audit this solution by:
1. Rewriting the solution from scratch WITHOUT looking at the AI's work
2. Comparing your solution to the AI's, step by step
3. Flagging every step where the AI's reasoning diverges from yours
4. Rating the overall solution: Correct / Partially correct / Incorrect
5. If incorrect, showing exactly where the error entered and how it propagated

Format this as a side-by-side comparison I can use as a classroom example.
I want to run a "Check the Machine" (CtM) verification activity in my [GRADE] [SUBJECT] class on [TOPIC].

The CtM protocol has 4 steps:
1. TASK — What you asked the AI to do
2. BEFORE — Your expectation/prediction BEFORE seeing the output
3. AFTER — What the AI produced vs. your prediction
4. TAKEAWAY — What the comparison reveals about the tool AND your own understanding

Design a 20-minute classroom activity using this protocol:
- Write the specific prompt students will give to an AI
- Write a model "Before" response showing what a student should predict
- Identify 2-3 specific things students should look for in the "After" comparison
- Write a model "Takeaway" response
- Include a simplified version for students who need scaffolding
- Include an extension for students who finish early
I'm teaching [GRADE] students about critically evaluating AI outputs in [SUBJECT].

Generate 6 Socratic discussion questions that:
- Progress from surface-level to deep critical thinking
- Challenge the assumption that "if the AI sounds confident, it's probably right"
- Include at least one question about the limits of their own expertise in catching errors
- Include at least one question about what makes AI errors DIFFERENT from human errors
- Are specific to [TOPIC] (not generic "AI ethics" questions)

For each question, include a brief facilitator note on what you're hoping students will discover.
Help me draft a classroom AI use policy for my [GRADE] [SUBJECT] class. The policy should be verification-based (not ban-based).

It should:
- Clearly state that AI tools are allowed when used with verification
- Require students to use the Check the Machine (CtM) protocol for any AI-assisted work
- Define what "verification" looks like for different types of assignments (homework, labs, essays, code)
- Distinguish between productive AI use (learning) and unproductive AI use (copying)
- Include 3 specific examples of acceptable use and 3 of unacceptable use
- Be written in student-friendly language
- Fit on one page

Tone: Clear, firm, but not punitive. Frame AI as a tool that requires skill to use well.
Write MakeCode JavaScript for BBC micro:bit V2 that: [DESCRIBE WHAT THE PROGRAM SHOULD DO]

Sensors to use: [LIST SENSORS]
Output: [LED display / sound / radio / serial]

Include comments explaining each block. Also provide the equivalent MakeCode blocks description so I can rebuild it visually.
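For reference, output from this prompt should look roughly like the sketch below, with the placeholders filled in as a hypothetical example (light sensor, button A trigger, LED display; the sensor and trigger choices are arbitrary):

```javascript
// Show the current light level when button A is pressed.
// lightLevel() returns 0 (dark) to 255 (bright).
input.onButtonPressed(Button.A, function () {
    basic.showNumber(input.lightLevel())
})
```

Blocks equivalent: an "on button A pressed" block containing a "show number" block fed by the "light level" block.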
I want my [GRADE] [SUBJECT] students to use BBC micro:bit sensors to investigate: [SCIENCE QUESTION]

Design a data collection plan that includes:
- Which sensor(s) to use and why
- How often to collect readings
- How many trials/samples needed
- A data table template
- Potential sources of error and how to control them
- How to connect findings to [NGSS STANDARD]
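The code backbone of a plan like this is a fixed-interval sampling loop; here is a minimal MakeCode sketch assuming a hypothetical light study (30 readings at a 5-second interval; students swap in their own sensor, count, and spacing):

```javascript
// Log 30 light readings, one every 5 seconds, as CSV rows on serial.
// Students can paste the serial output straight into a spreadsheet.
serial.writeLine("reading,seconds,lightLevel")
for (let i = 0; i < 30; i++) {
    serial.writeLine("" + (i + 1) + "," + (i * 5) + "," + input.lightLevel())
    basic.pause(5000)  // 5000 ms = 5 s between readings
}
```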
I have a BBC micro:bit V2 program written in MakeCode JavaScript: [PASTE MAKECODE JS]

Translate this to MicroPython for the micro:bit V2 Python editor (python.microbit.org).

Requirements:
- Use only micro:bit MicroPython libraries (from microbit import *)
- Preserve the exact same behavior and timing
- Add comments explaining any differences between the MakeCode and MicroPython versions
- Flag any MakeCode features that don't have a direct MicroPython equivalent
- Include import statements at the top

After the code, write 3 things students should CHECK to verify the translation is correct before flashing to the device.
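As a concrete instance of the "flag missing equivalents" requirement: MakeCode has convenience blocks with no one-line MicroPython counterpart, such as the bar-graph call below (a hypothetical example; a faithful translation has to rebuild this behavior from display.set_pixel() calls):

```javascript
// MakeCode-only convenience: plotBarGraph scales a value against a
// maximum and fills the 5x5 matrix as a bar chart in a single call.
basic.forever(function () {
    led.plotBarGraph(input.lightLevel(), 255)
    basic.pause(500)
})
```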
The BBC micro:bit V2 [SENSOR NAME] sensor has a known calibration issue: [DESCRIBE THE ISSUE — e.g., "temperature sensor reads CPU die temp, typically 3-8°C above ambient"]

Design a calibration experiment for [GRADE] students that:
- Uses a reference measurement tool they'd have in a classroom (e.g., thermometer, ruler, phone compass)
- Has them collect at least 10 paired readings (micro:bit vs reference)
- Teaches them to calculate and apply a correction offset in code
- Connects to [NGSS STANDARD]
- Takes no more than 20 minutes

Include the MakeCode JavaScript code for both the uncalibrated and calibrated versions. Frame this as an engineering design activity, not just a "fix the sensor" exercise — calibration IS real engineering.
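The correction the prompt asks for usually reduces to a subtract-the-offset pattern; here is a minimal sketch for the temperature example (the OFFSET value is a placeholder; students compute their own as the average micro:bit-minus-thermometer difference):

```javascript
// Uncalibrated reading: temperature() reports CPU die temp, which runs warm.
// Calibrated reading: subtract an offset measured against a real thermometer.
let OFFSET = 5  // hypothetical offset in °C; replace with your class's measured average

basic.forever(function () {
    let raw = input.temperature()
    let corrected = raw - OFFSET
    serial.writeValue("raw", raw)             // log both so students see the gap
    serial.writeValue("corrected", corrected)
    basic.showNumber(corrected)
    basic.pause(2000)
})
```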
Write a BBC micro:bit V2 program that combines these sensors: [LIST 2-3 SENSORS]

The program should:
- Read all sensors at a regular interval
- Combine the readings into a meaningful output (e.g., a "comfort index", an "earthquake alert", a "weather station")
- Display results on the 5×5 LED matrix in a way students can interpret
- Include at least one conditional (if/else) based on sensor thresholds
- Log data to serial output for later analysis

Provide BOTH MakeCode JavaScript and MicroPython versions. For each version, add comments that explain:
- What each sensor reading means physically
- Why you chose those threshold values
- What could go wrong (sensor limitations, edge cases)

End with 3 extension challenges students could try on their own.
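To picture the expected shape of the answer, here is a minimal MakeCode sketch of the "comfort index" idea (the 26 °C and sound-level 100 thresholds are hypothetical starting points, not calibrated values):

```javascript
basic.forever(function () {
    let temp = input.temperature()   // °C from the CPU die, so it reads warm
    let noise = input.soundLevel()   // 0-255 from the V2 microphone
    serial.writeValue("temp", temp)  // log both channels for later analysis
    serial.writeValue("noise", noise)
    if (temp < 26 && noise < 100) {
        basic.showIcon(IconNames.Happy)  // comfortable: cool enough and quiet enough
    } else {
        basic.showIcon(IconNames.Sad)    // too hot or too loud
    }
    basic.pause(1000)  // one reading per second
})
```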
I asked an AI to write micro:bit code, but it doesn't work correctly on the physical device.

Code: [PASTE CODE]
Expected behavior: [WHAT IT SHOULD DO]
Actual behavior: [WHAT ACTUALLY HAPPENS ON THE DEVICE]
Programming environment: [MakeCode / MicroPython]

Debug this using the Check the Machine framework:
1. TASK: Restate what the code should do in plain language
2. BEFORE: List what each sensor/output SHOULD produce given the code logic
3. AFTER: Identify the mismatch between expected and actual behavior
4. FIX: Show the corrected code with comments on every line you changed
5. TAKEAWAY: Explain what category of error this was (syntax, logic, hardware limitation, AI hallucination) and how to catch it next time

IMPORTANT: The micro:bit temperature sensor reads CPU die temperature (3-8°C above ambient). The compass requires calibration on first use. The light sensor uses the LED matrix and may flicker. Flag if any of these known issues might be the cause.
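Before running the protocol, a raw-readings dump like this sketch can rule the known hardware quirks in or out (note: calling compassHeading() launches the on-device calibration routine if the compass has never been calibrated):

```javascript
// Stream raw readings so you can compare them against the known quirks
// listed above before blaming the pasted code.
basic.forever(function () {
    serial.writeValue("tempC", input.temperature())       // expect ~3-8 above room temperature
    serial.writeValue("light", input.lightLevel())        // shares the LED matrix; readings can jitter
    serial.writeValue("heading", input.compassHeading())  // triggers calibration on first use
    basic.pause(1000)
})
```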
I teach [GRADE] [SUBJECT] and have a BBC micro:bit V2 with these onboard sensors: temperature, accelerometer, compass, light level, microphone, touch logo, buttons.

For my grade level and subject, create a crosswalk table:

| Sensor | Science Phenomenon | Driving Question | NGSS PE | Activity Sketch (2-3 sentences) |

Requirements:
- Only include NGSS PEs that genuinely match the activity (verify the codes are real)
- Driving questions should be testable with the micro:bit sensors
- Activity sketches should be realistic for a [DURATION]-minute class period
- Include at least one activity that uses 2+ sensors together
- Flag any activities that need calibration and explain why
Design a complete IoT lesson for [GRADE] [SUBJECT] using the BBC micro:bit V2.

Topic: [TOPIC / PHENOMENON]
Sensor(s): [LIST SENSORS]
Standard: [NGSS PE or CSTA STANDARD]
Duration: [MINUTES]

Structure it using the CRAFT cycle:
- **Contextualize** (5 min): Real-world connection and career link
- **Reframe** (5 min): A misconception about [TOPIC] or about how sensors work
- **Assemble** (25-30 min): I Do → We Do → You Do with actual micro:bit code
- **Fortify** (10 min): Data verification + Check the Machine exercise for any AI-generated code
- **Transfer** (5 min): Connection to next lesson + extension prompt

Include working MakeCode JavaScript code for the core activity. Include a student-facing CtM prompt for verifying the code. End with 3 extension prompts students can paste into an LLM to keep building.
Write BBC micro:bit V2 MakeCode JavaScript that visualizes [SENSOR] data on the 5×5 LED matrix.

Instead of just showing a number, create a visual representation:
- Use LED brightness levels (0-255) to show magnitude
- OR use a bar graph pattern (fill columns based on value)
- OR use an animation that changes with the reading

The visualization should:
- Update every [N] seconds
- Be readable from arm's length (a student holding the micro:bit)
- Have a clear "high" and "low" state that students can distinguish

Include comments explaining the math that maps sensor values to LED patterns. Also include the MicroPython equivalent.
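The mapping math the prompt asks students to see is typically a single Math.map call; here is a minimal brightness-based sketch (light sensor and center pixel chosen as hypothetical examples):

```javascript
basic.forever(function () {
    let light = input.lightLevel()  // sensor range: 0 (dark) to 255 (bright)
    // Map the sensor range onto the LED brightness range. Here both are
    // 0-255, so the map is 1:1, but Math.map keeps the code correct if
    // you swap in a sensor with a different range (e.g., temperature).
    let bright = Math.map(light, 0, 255, 0, 255)
    led.plotBrightness(2, 2, bright)  // center pixel glows with the light level
    basic.pause(1000)
})
```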
Explain the difference between edge computing and cloud computing for [GRADE] students.

Requirements:
- Use the BBC micro:bit as the concrete example of an edge device
- Compare it to a familiar cloud service (e.g., asking Siri or Google a question)
- Include an analogy appropriate for this age group
- Address the misconception that "IoT" always requires internet connectivity (the micro:bit processes data locally — that's the "edge" part)
- Keep it under 200 words
- End with a thought question students can discuss in pairs

I'll use this as a 3-minute warm-up before our micro:bit coding session.
Draft a parent newsletter paragraph explaining that our class is starting a unit on [TOPIC].

Tone: Warm, professional, excited
Length: 150 words max

Include: What students will learn, one way parents can support at home, and a sentence about how we're using AI tools responsibly in class.
I have a [SUBJECT] lesson on [TOPIC] for [GRADE]. A student has the following accommodation: [DESCRIBE ACCOMMODATION]

Suggest 3 specific ways to modify the lesson activities while maintaining rigor and access to the same core learning objective. Be practical — suggest modifications I can implement with the materials I already have.