Question 1
What is a key advantage of integrating AI into productivity tools for end users?
Question 2
Which AWS service allows access to foundation models via API without infrastructure management?
Question 3
Why do AI service providers release multiple versions of the same model?
Question 4
What is prompt engineering?
Question 5
In many LLMs, what does 'temperature' control?
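For study purposes, the mechanism behind this question can be sketched in a few lines: temperature divides the model's logits before the softmax, so lower values sharpen the output distribution and higher values flatten it. This is a minimal illustration with made-up logits, not any provider's implementation.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax.

    Lower temperature -> sharper distribution (more deterministic);
    higher temperature -> flatter distribution (more varied output)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # probability spreads out
```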
Question 6
Why do LLMs sometimes generate hallucinations? (Select all that apply)
Question 7
What is a common benefit of using AI assistants like Copilot?
Question 8
If your prompt exceeds the model's context window, what is most likely to happen?
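A toy sketch of the failure mode this question targets: text beyond the window is silently dropped, so long prompts can lose instructions. The ~4-characters-per-token ratio is a rough rule of thumb, not a real tokenizer.

```python
def truncate_to_window(prompt, max_tokens=8, chars_per_token=4):
    """Keep only the text that fits in the context window.

    Overflowing text is simply cut off, which is why instructions
    placed past the limit never reach the model."""
    limit = max_tokens * chars_per_token
    return prompt[:limit]

long_prompt = "Summarize this report. " + "x" * 100
fitted = truncate_to_window(long_prompt)
```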
Question 9
What is the best way to ask an LLM for structured output?
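One common answer pattern worth knowing for this question: spell out the exact schema in the prompt and parse the reply programmatically. The prompt text and the simulated model reply below are illustrative assumptions, not output from a real model.

```python
import json

# Ask for a fixed JSON schema so the reply is machine-readable.
prompt = (
    "Extract the product name and price from the text below. "
    "Respond ONLY with JSON matching this schema: "
    '{"name": string, "price": number}\n\n'
    "Text: The new X100 headphones retail for $79.99."
)

# Simulated well-formed model reply; a real app would also handle
# json.JSONDecodeError for malformed replies.
reply = '{"name": "X100 headphones", "price": 79.99}'
data = json.loads(reply)
```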
Question 10
In which scenario should a user avoid relying solely on Copilot output?
Question 11
Why is it important for an AI system to be able to explain how it arrives at its results?
Question 12
In Microsoft Excel, how can Copilot help users analyze data?
Question 13
Which AWS service allows end users to interact with generative AI models through a managed interface?
Question 14
In the context of LLMs, what does a 'hallucination' typically refer to?
Question 15
Which metrics are better suited for imbalanced datasets? (Select all that apply)
Question 16
Which Microsoft service enables end users to build AI-powered workflows without writing code?
Question 17
A strong 'role + task + context + constraints' prompt looks like:
Question 18
What is Microsoft Copilot mainly used for?
Question 19
What are the benefits of using RAG? (Select all that apply)
Question 20
An AI user designs prompts for a high-stakes decision-support assistant. Which technique best improves safety and reliability?