AI-102 Generative AI

Use this only when the primary task is Azure OpenAI, prompting, embeddings, vector retrieval, or content filtering.

Exams
AI-102
Questions
35
Comments
268

1. AI-102 Topic 7 Question 2

Sequence
6
Discussion ID
134943
Source URL
https://www.examtopics.com/discussions/microsoft/view/134943-exam-ai-102-topic-7-question-2-discussion/
Posted By
audlindr
Posted At
Feb. 29, 2024, 9:43 p.m.

Question

You have an Azure subscription. The subscription contains an Azure OpenAI resource that hosts a GPT-3.5 Turbo model named Model1.

You configure Model1 to use the following system message: “You are an AI assistant that helps people solve mathematical puzzles. Explain your answers as if the request is by a 4-year-old.”

Which type of prompt engineering technique is this an example of?

  • A. few-shot learning
  • B. affordance
  • C. chain of thought
  • D. priming

Suggested Answer

D

Comments (16)

Comment 1

ID: 1166322 User: River06 Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Tue 05 Mar 2024 09:58 Selected Answer: D Upvotes: 14

Priming is utilised in this example because it involves setting up the context or role of the AI model explicitly in a system message. It instructs the model about its role ("You are an AI assistant that helps people solve mathematical puzzles.") and also provides directions about how it should respond ("Explain your answers as if the requestor is a 4-year-old."). This essentially 'primes' the model for the conversation, as it lets the model know the expected behavior and persona that it needs to take on throughout the dialogue.
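The priming pattern described above can be sketched as a chat-completions request payload. This is a minimal illustration only: no API call is made, the function name is hypothetical, and the system-message text is copied from the question.

```python
# A minimal sketch of "priming": the system message is sent first and sets
# the assistant's persona before any user turn is processed. Only the
# request payload is built here; no API call is made.
def build_priming_payload(user_question: str) -> dict:
    system_text = (
        "You are an AI assistant that helps people solve mathematical "
        "puzzles. Explain your answers as if the request is by a 4-year-old."
    )
    return {
        "messages": [
            {"role": "system", "content": system_text},  # primes the model
            {"role": "user", "content": user_question},
        ],
    }

payload = build_priming_payload("What is 2 + 2?")
```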

Comment 2

ID: 1241433 User: eskimolight Badges: Highly Voted Relative Date: 1 year, 8 months ago Absolute Date: Wed 03 Jul 2024 15:33 Selected Answer: - Upvotes: 6

Exam Question June 2024

Comment 3

ID: 1716321 User: travel007 Badges: Most Recent Relative Date: 2 weeks, 4 days ago Absolute Date: Sat 21 Feb 2026 18:15 Selected Answer: D Upvotes: 1

Answer D, asked in Feb 2026.

Comment 4

ID: 1235224 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:58 Selected Answer: D Upvotes: 2

I say this answer is D.

Comment 5

ID: 1230478 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 14:19 Selected Answer: D Upvotes: 2

D is the correct answer.

Comment 6

ID: 1225594 User: demonite Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 17:18 Selected Answer: - Upvotes: 1

C is the answer: it's about explaining step by step.
Not priming, which manipulates the output.

Comment 7

ID: 1220969 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 15:16 Selected Answer: A Upvotes: 1

A. Few-shot learning is the right answer.

Comment 7.1

ID: 1245002 User: Toby86 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 09 Jul 2024 19:07 Selected Answer: - Upvotes: 1

Where is the example? Few shot is: 2+2=4, 5+5=10. What is 4+4?

Comment 8

ID: 1220674 User: Eskape Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 05:28 Selected Answer: - Upvotes: 2

A is the answer.
A common way to adapt language models to new tasks is to use few-shot learning. In few-shot learning, a set of training examples is provided as part of the prompt to give additional context to the model.

When using the Chat Completions API, a series of messages between the User and Assistant (written in the new prompt format), can serve as examples for few-shot learning. These examples can be used to prime the model to "respond in a certain way, emulate particular behaviors", and seed answers to common questions.
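For contrast with priming, the few-shot pattern this comment describes can be sketched as alternating user/assistant example turns, following Toby86's "2+2=4, 5+5=10, what is 4+4?" illustration above. Only the payload is built; no API call is made, and all message text is illustrative.

```python
# Few-shot learning sketch: worked examples ("shots") are supplied as
# alternating user/assistant turns before the real question. This is what
# the scenario in the question lacks, which is why it is priming instead.
few_shot_payload = {
    "messages": [
        {"role": "system", "content": "You solve simple arithmetic."},
        # Worked examples the model can imitate:
        {"role": "user", "content": "2+2"},
        {"role": "assistant", "content": "4"},
        {"role": "user", "content": "5+5"},
        {"role": "assistant", "content": "10"},
        # The real question comes last:
        {"role": "user", "content": "4+4"},
    ],
}
```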

Comment 8.1

ID: 1245003 User: Toby86 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 09 Jul 2024 19:08 Selected Answer: - Upvotes: 1

Are we reading the same question? There are no examples.

Comment 9

ID: 1204131 User: bugimachi Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Mon 29 Apr 2024 19:02 Selected Answer: C Upvotes: 3

I'd go with "chain of thoughts", since it's about explaining the answers.
Priming is a different story: "This refers to including a few words or phrases at the end of the prompt to obtain a model response that follows the desired form. For example, using a cue such as “Here’s a bulleted list of key points:\n- ” can help make sure the output is formatted as a list of bullet points."
...that's not the case here!

https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions#prime-the-output

Comment 10

ID: 1199114 User: shorymor Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sat 20 Apr 2024 13:02 Selected Answer: D Upvotes: 3

Priming is the answer

https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions#system-message

Comment 11

ID: 1176212 User: chandiochan Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 18 Mar 2024 03:33 Selected Answer: D Upvotes: 1

Must be priming, here we setting the role for the AI model

Comment 12

ID: 1164566 User: warrior1234 Badges: - Relative Date: 2 years ago Absolute Date: Sun 03 Mar 2024 08:35 Selected Answer: - Upvotes: 2

D. Priming

Priming involves providing context or instructions to the model before it generates a response. In this case, the system message is priming the GPT-3.5 Turbo model by setting the expectation that it should provide explanations in a way that is understandable to a 4-year-old. This technique helps guide the model's behavior and output based on the given context or instruction.

Comment 13

ID: 1164244 User: Harry300 Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 18:32 Selected Answer: C Upvotes: 1

Chain of thought. Try it on OpenAI. It explains step by step for a formula like 7+5*3+8

Comment 13.1

ID: 1279797 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 06 Sep 2024 22:31 Selected Answer: - Upvotes: 1

sir, really, no

2. AI-102 Topic 8 Question 2

Sequence
28
Discussion ID
149577
Source URL
https://www.examtopics.com/discussions/microsoft/view/149577-exam-ai-102-topic-8-question-2-discussion/
Posted By
jafaca
Posted At
Oct. 15, 2024, 7:05 p.m.

Question

You have an Azure subscription that contains an Azure AI Content Safety resource named CS1.

You create a test image that contains a circle.

You submit the test image to CS1 by using the curl command and the following command-line parameters.

image

What should you expect as the output?

  • A. 0
  • B. 0.0
  • C. 7
  • D. 100

Suggested Answer

A

Comments (6)

Comment 1

ID: 1704049 User: zlxsea Badges: - Relative Date: 2 months ago Absolute Date: Mon 05 Jan 2026 12:38 Selected Answer: - Upvotes: 1

the answer is B
Output type: "EightSeverityLevels"
returns a floating-point severity score from 0.0 up to 7.0

Comment 2

ID: 1314250 User: 3fbc31b Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Mon 18 Nov 2024 22:51 Selected Answer: A Upvotes: 3

The severity levels range from 0 to 7, with 0 being the lowest (no offensive material) and 7 being the most offensive.

Given this, A is the answer.

Comment 3

ID: 1304222 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 00:35 Selected Answer: A Upvotes: 3

A is correct.

Comment 4

ID: 1298785 User: mrg998 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Wed 16 Oct 2024 16:45 Selected Answer: - Upvotes: 1

In this scenario, you are submitting an image to an Azure AI Content Safety resource for evaluation, specifically looking for violent content using the "EightSeverityLevels" output type. The image you submitted contains a simple circle, which likely doesn't depict any violent or harmful content.

Given that the image is non-violent, the expected severity level for violence should be the lowest possible score. In the "EightSeverityLevels" configuration, the scores range from 0 to 7, with 0 representing the least severe and 7 representing the most severe.

Thus, the correct output in this case would be:

A. 0

Comment 5

ID: 1298360 User: jafaca Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 15 Oct 2024 19:05 Selected Answer: A Upvotes: 2

0: Non-offensive
1-2: Very Low
3-4: Low
5-6: Moderate
7-8: High
9-10: Very High

Comment 5.1

ID: 1298363 User: jafaca Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 15 Oct 2024 19:09 Selected Answer: - Upvotes: 4

0: Non-offensive
1: Very Low
2: Low
3: Moderate-Low
4: Moderate
5: Moderate-High
6: High
7: Very High
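The eight-level scale listed above can be captured in a small helper. This is a hypothetical sketch: the labels are taken from the comment, not from official documentation, and an innocuous image such as a plain circle would be expected to score 0.

```python
# Hypothetical mapping for the "EightSeverityLevels" output type, using the
# label scale from the comment above (the exact wording Microsoft uses may
# differ). Scores are integers 0-7.
EIGHT_LEVEL_LABELS = {
    0: "Non-offensive",
    1: "Very Low",
    2: "Low",
    3: "Moderate-Low",
    4: "Moderate",
    5: "Moderate-High",
    6: "High",
    7: "Very High",
}

def label_for(severity: int) -> str:
    """Map an integer severity score (0-7) to its human-readable label."""
    if severity not in EIGHT_LEVEL_LABELS:
        raise ValueError("EightSeverityLevels scores are integers from 0 to 7")
    return EIGHT_LEVEL_LABELS[severity]
```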

3. AI-102 Topic 7 Question 4

Sequence
30
Discussion ID
135146
Source URL
https://www.examtopics.com/discussions/microsoft/view/135146-exam-ai-102-topic-7-question-4-discussion/
Posted By
chandiochan
Posted At
March 4, 2024, 4:56 a.m.

Question

You are building a chatbot for a travel agent. The chatbot will use the Azure OpenAI GPT 3.5 model and will be used to make travel reservations.

You need to maximize the accuracy of the responses from the chatbot.

What should you do?

  • A. Configure the model to include data from the travel agent's database.
  • B. Set the Top P parameter for the model to 0.
  • C. Set the Temperature parameter for the model to 0.
  • D. Modify the system message used by the model to specify that the answers must be accurate.

Suggested Answer

C

Comments (21)

Comment 1

ID: 1173329 User: GHill1982 Badges: Highly Voted Relative Date: 1 year, 12 months ago Absolute Date: Thu 14 Mar 2024 12:20 Selected Answer: A Upvotes: 13

Could this not be A? Configuring the model to include data from the travel agent’s database would allow the chatbot to access a source of relevant and specific information, which can significantly improve the accuracy of its responses. Fine-tuning the model with data specific to the travel agent’s services and customer interactions can lead to more precise and contextually appropriate answers.

Comment 2

ID: 1291062 User: RajuTS Badges: Highly Voted Relative Date: 1 year, 5 months ago Absolute Date: Sun 29 Sep 2024 11:48 Selected Answer: C Upvotes: 10

The question is about how to ensure the model's responses are accurate and not creatively made up.

Hence the answer is option C: setting a lower temperature ensures the model is less creative and more factual.

Comment 3

ID: 1703535 User: zlxsea Badges: Most Recent Relative Date: 2 months, 1 week ago Absolute Date: Sun 04 Jan 2026 03:54 Selected Answer: C Upvotes: 1

For rule-bound booking scenarios where errors are unacceptable (such as travel reservations), Temperature = 0 is standard practice.

Comment 4

ID: 1608419 User: bannamk Badges: - Relative Date: 6 months ago Absolute Date: Fri 12 Sep 2025 08:10 Selected Answer: C Upvotes: 4

C is the answer. Temperature = 0 makes the model deterministic and focused, reducing randomness and ensuring it picks the most likely response. This is ideal when accuracy and consistency are critical, such as in travel bookings. Why not A (configure the model to include data from the travel agent's database)? This is useful for relevance, but GPT-3.5 cannot directly access external databases unless integrated via RAG or tools. It doesn't maximize accuracy by itself.
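The Temperature = 0 setting discussed above can be sketched as a chat-completions request body. This is illustrative only: no API call is made and the system-message text is a placeholder.

```python
# Sketch of a request body with Temperature set to 0 for deterministic,
# accuracy-focused output. Only the payload is built; no API call is made.
def build_reservation_request(user_text: str) -> dict:
    return {
        "messages": [
            {"role": "system",
             "content": "You are a travel-reservation assistant."},
            {"role": "user", "content": user_text},
        ],
        # 0 removes sampling randomness: the most probable token is always
        # chosen, favoring consistent, factual answers over creativity.
        "temperature": 0,
    }

request = build_reservation_request("Book a flight to Paris on Friday.")
```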

Comment 5

ID: 1607760 User: uncledana Badges: - Relative Date: 6 months ago Absolute Date: Wed 10 Sep 2025 09:37 Selected Answer: C Upvotes: 1

Option C to set the lower temperature that can ensure model is less creative and more factual based on the data available

Comment 6

ID: 1607593 User: waqy Badges: - Relative Date: 6 months ago Absolute Date: Tue 09 Sep 2025 22:19 Selected Answer: C Upvotes: 1

Lowering temperature to 0 makes outputs deterministic and reduces creative variance—best for maximizing accuracy/consistency in transactional tasks like reservations.

Comment 7

ID: 1565492 User: marcellov Badges: - Relative Date: 10 months, 2 weeks ago Absolute Date: Thu 01 May 2025 20:31 Selected Answer: C Upvotes: 2

C. Set the Temperature parameter for the model to 0

To maximize response accuracy for travel reservations, lower the Temperature to 0, which forces the model to always select the most probable next token (deterministic output). This reduces creativity and increases consistency, critical for factual tasks like bookings.

Key Rationale:
Database integration (A) enhances relevance but requires RAG/plugins and doesn’t directly optimize baseline model accuracy.
Top P 0 (B) is invalid (Top P ranges from 0.1 to 1.0), and even low Top P values allow some variability.
Temperature 0 (C): Eliminates randomness, ensuring the model adheres to the most statistically likely responses.
System message (D) helps guide responses but doesn’t inherently improve factual accuracy without parameter tuning.

Comment 8

ID: 1400160 User: Mattt Badges: - Relative Date: 11 months, 4 weeks ago Absolute Date: Tue 18 Mar 2025 14:29 Selected Answer: C Upvotes: 4

The GPT-3.5 model does not have direct access to external databases.

Comment 8.1

ID: 1564564 User: tech_rum Badges: - Relative Date: 10 months, 2 weeks ago Absolute Date: Tue 29 Apr 2025 03:49 Selected Answer: - Upvotes: 2

The answer should be C (set temperature to 0). This would improve output determinism and accuracy.

Comment 9

ID: 1338777 User: FatFatSam Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Fri 10 Jan 2025 10:29 Selected Answer: C Upvotes: 5

A is not correct. There is no configuration available to include database data; you have to use RAG or fine-tune the model.
B: setting Top P to zero will make it more unpredictable.
D: modifying the system message may affect the way the model generates responses.
C: setting temperature to zero makes the model less random. So the answer is C.

Comment 10

ID: 1331011 User: MASANASA Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Tue 24 Dec 2024 04:13 Selected Answer: A Upvotes: 1

I say A is the right answer.
If we look at the question, it says “maximize the accuracy of the response”. The Temperature parameter will have an effect, but it doesn't maximize the response accuracy. I believe data fuels AI and will maximize the response.

Comment 11

ID: 1323197 User: friendlyvlad Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sat 07 Dec 2024 17:30 Selected Answer: C Upvotes: 3

Adding more data as in A will not necessarily improve the accuracy of the responses

Comment 12

ID: 1273615 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 27 Aug 2024 19:35 Selected Answer: A Upvotes: 3

I'm laughing out loud at the ones who say D. Seriously? Have you ever worked with OpenAI?

Comment 13

ID: 1246780 User: krzkrzkra Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 12 Jul 2024 15:30 Selected Answer: A Upvotes: 1

Selected Answer: A

Comment 14

ID: 1230477 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 14:19 Selected Answer: A Upvotes: 1

A is correct answer.

Comment 15

ID: 1228959 User: etellez Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 12 Jun 2024 13:05 Selected Answer: - Upvotes: 4

GitHub Copilot has the answer: set Temperature to 0.

To maximize the accuracy of the responses from the chatbot, you should:

C. Set the Temperature parameter for the model to 0.

The temperature parameter controls the randomness of the model's output. A lower temperature will cause the model to make the most likely prediction at each step, making the output more deterministic and focused, thus potentially increasing the accuracy of the responses.

Comment 15.1

ID: 1234359 User: exnaniantwort Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 13:11 Selected Answer: - Upvotes: 2

Temperature:
Range: Typically between 0 and 1
Implications:
Lower values (close to 0): The model output is more deterministic, meaning it tends to produce more predictable and repetitive text. Useful for tasks requiring high precision.

Higher precision doesn't mean higher accuracy

Comment 16

ID: 1220323 User: anto69 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:12 Selected Answer: D Upvotes: 1

Accuracy is granted by initial prompt. D

Comment 16.1

ID: 1275376 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sat 31 Aug 2024 07:06 Selected Answer: - Upvotes: 1

No it isn't, dude, lmao. GPT hallucinates like crazy; the only way to prevent this is to present data that it can use to answer.

Comment 16.1.1

ID: 1279799 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 06 Sep 2024 22:34 Selected Answer: - Upvotes: 2

RAG yes, but please not fine tuning. That cannot be the answer

Comment 17

ID: 1217731 User: vovap0vovap Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 20:48 Selected Answer: - Upvotes: 1

It's worth noting that even the MS labs use travel-agency grounding information for the models.

4. AI-102 Topic 7 Question 6

Sequence
31
Discussion ID
135215
Source URL
https://www.examtopics.com/discussions/microsoft/view/135215-exam-ai-102-topic-7-question-6-discussion/
Posted By
chandiochan
Posted At
March 5, 2024, 4:36 a.m.

Question

HOTSPOT -

You have an Azure subscription that contains an Azure OpenAI resource named AI1.

You build a chatbot that will use AI1 to provide generative answers to specific questions.

You need to ensure that the responses are more creative and less deterministic.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Comments (23)

Comment 1

ID: 1173670 User: GHill1982 Badges: Highly Voted Relative Date: 1 year, 12 months ago Absolute Date: Thu 14 Mar 2024 20:51 Selected Answer: - Upvotes: 16

The ChatRole that should be used is ChatRole.User. This role is assigned to the messages that come from the user, which the chatbot is responding to. The Temperature setting can be adjusted to increase creativity in the responses.

Comment 1.1

ID: 1575338 User: vovap0vovap Badges: - Relative Date: 9 months, 1 week ago Absolute Date: Fri 06 Jun 2025 16:05 Selected Answer: - Upvotes: 1

In the code example it is sending an empty prompt. There is no reason to do that for the user role.

Comment 1.2

ID: 1323211 User: friendlyvlad Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sat 07 Dec 2024 17:55 Selected Answer: - Upvotes: 5

The ChatRole.User specifies that the message is coming from the user in the conversation, while ChatRole.Assistant indicates that the message is coming from the AI assistant. The question clearly states that we need to modify output. In addition, the temperature parameter is primarily used to control the randomness of the model's generated response, which is associated with the ChatRole.Assistant
1. ChatRole.Assistant
2. Temperature

Comment 2

ID: 1230530 User: NagaoShingo Badges: Highly Voted Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 16:22 Selected Answer: - Upvotes: 6

1. user
2. temperature
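The two hotspot answers discussed in this thread can be sketched in Python (the original snippet uses C# with Azure.AI.OpenAI's ChatMessage and ChatCompletionsOptions; the names and values below are illustrative, and no API call is made). Raising Temperature makes the sampled output more creative and less deterministic.

```python
# Python analogue of the C# hotspot: chat roles plus a raised Temperature.
# Only the request payload is built here; no API call is made.
def build_creative_request(system_text: str, user_text: str) -> dict:
    return {
        "messages": [
            {"role": "system", "content": system_text},  # ChatRole.System
            {"role": "user", "content": user_text},      # ChatRole.User
        ],
        "temperature": 1.0,  # higher value => more varied, creative output
    }

request = build_creative_request("You are a helpful chatbot.",
                                 "Tell me about Azure OpenAI.")
```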

Comment 3

ID: 1703540 User: zlxsea Badges: Most Recent Relative Date: 2 months, 1 week ago Absolute Date: Sun 04 Jan 2026 04:13 Selected Answer: - Upvotes: 1

ChatRole.System
Temperature
The string inside new ChatMessage(..., @"") in the code is empty (@"").
If it were User, this would usually be a variable (such as userInput).
If it is System, template code often reserves a slot first (an empty string or placeholder) to set the AI's identity.

Comment 4

ID: 1575336 User: vovap0vovap Badges: - Relative Date: 9 months, 1 week ago Absolute Date: Fri 06 Jun 2025 16:03 Selected Answer: - Upvotes: 3

1. System
2. Temperature
There is no reason to send an *empty* user prompt to a model, nor an assistant one.
In a sense, sending an empty system prompt will randomize results, though that is not really a useful thing.

Comment 4.1

ID: 1606923 User: cesaraldo Badges: - Relative Date: 6 months ago Absolute Date: Sun 07 Sep 2025 12:13 Selected Answer: - Upvotes: 1

The question is badly formulated. Without context, an empty prompt means that no ChatRole will make any significant impact.

Comment 5

ID: 1312668 User: Alan_CA Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 15 Nov 2024 15:37 Selected Answer: - Upvotes: 1

An example given by Copilot :
{"role": "system", "content": "temperature": 0.7}

Comment 6

ID: 1282511 User: csdodo Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 12 Sep 2024 09:15 Selected Answer: - Upvotes: 2

ChatRole.User represents generating a response without being prompted by a speaker, making the AI more creative and less deterministic, allowing it more freedom to express itself.

Comment 7

ID: 1273618 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 27 Aug 2024 19:41 Selected Answer: - Upvotes: 5

You are not setting the temp for the user; that doesn't make sense. People, please think a bit and don't trust stupid responses from obviously dumb chatbots.

System is what you're trying to configure.

Comment 7.1

ID: 1279802 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 06 Sep 2024 22:43 Selected Answer: - Upvotes: 2

Sir, you are not setting temperature to a role, but to the request itself.

Comment 7.1.1

ID: 1286703 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Fri 20 Sep 2024 09:27 Selected Answer: - Upvotes: 1

This is right; you are setting the temp for a response.
To test this, make a REST API call to an Azure OpenAI instance: even when posting the user prompt, you get an option for temperature in the body.

Comment 8

ID: 1271950 User: Moneybing Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 25 Aug 2024 02:32 Selected Answer: - Upvotes: 1

Copilot says ChatRole.User.

"To make the responses more creative and less deterministic, you should set the ChatRole.User in your coding. By doing so, you allow the chatbot to generate more imaginative and varied answers, as it will treat the user input as a prompt for creative responses. This approach encourages the chatbot to think beyond predefined patterns and produce more engaging content"

Comment 9

ID: 1265120 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 13:16 Selected Answer: - Upvotes: 5

Should be System, since we're modifying the behavior of the bot. So we start by defining the system message, which must be defined only once.

Comment 10

ID: 1251674 User: nithin_reddy Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Sat 20 Jul 2024 11:55 Selected Answer: - Upvotes: 1

The right answer is ChatRole.Assistant and Temperature; the settings apply to responses from the chatbot.

Comment 11

ID: 1231369 User: fba825b Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 16 Jun 2024 14:42 Selected Answer: - Upvotes: 2

First would be ChatRole.System as it is the first chat in the ChatMessage object and the second answer is Temperature

Comment 11.1

ID: 1231371 User: fba825b Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 16 Jun 2024 14:49 Selected Answer: - Upvotes: 2

Mistake! I actually think it should be ChatRole.User, as we are then awaiting the answer from the LLM (assistant).

Comment 12

ID: 1224114 User: formacionkiteris Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 04 Jun 2024 14:11 Selected Answer: - Upvotes: 4

First Option should be ChatRole.System.

"Typical usage begins with a chat message for the System role that provides instructions for the behavior of the assistant followed by alternating messages between the User role and Assistant role."


https://learn.microsoft.com/en-us/dotnet/api/azure.ai.openai.chatcompletionsoptions.messages?view=azure-dotnet-preview#remarks

Comment 13

ID: 1218405 User: vovap0vovap Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 17:17 Selected Answer: - Upvotes: 3

Well, I think that should be ChatRole.System.
ChatRole.User naturally makes no sense: that is the user's request.

Comment 14

ID: 1211910 User: Dhibi_111 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 15 May 2024 13:39 Selected Answer: - Upvotes: 4

Selecting ChatRole.User as one of the options would imply that the responses should be generated based on the user's input or perspective. However, in this scenario, the goal is to ensure that the responses from the chatbot are more creative and less deterministic. Including ChatRole.User might limit the creativity of the responses because the model would primarily consider the user's input rather than generating novel content.

By primarily focusing on ChatRole.Assistant, the responses will be predominantly generated from the perspective of the AI assistant, allowing for more creative and varied outputs. This approach ensures that the chatbot's responses are not overly influenced by the user's input, leading to more diverse and imaginative answers.

Comment 15

ID: 1199112 User: shorymor Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sat 20 Apr 2024 13:00 Selected Answer: - Upvotes: 3

ChatRole.User (Assuming the input is an actual message from user)
Temperature

Documentation/Examples: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/reproducible-output?tabs=pyton

Comment 16

ID: 1186338 User: f2c587e Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 30 Mar 2024 20:30 Selected Answer: - Upvotes: 1

D, C. With the User role the chat is less formal; with temperature you adjust the creativity of the responses.

Comment 17

ID: 1185169 User: chandiochan Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 29 Mar 2024 03:24 Selected Answer: - Upvotes: 2

Yes, if the content you're inserting is the actual user message, then you should use ChatRole.User. This role signifies that the message is coming from the user, as opposed to the assistant, which would be generating the response.

5. AI-102 Topic 8 Question 8

Sequence
34
Discussion ID
150482
Source URL
https://www.examtopics.com/discussions/microsoft/view/150482-exam-ai-102-topic-8-question-8-discussion/
Posted By
a8da4af
Posted At
Oct. 29, 2024, 11:09 p.m.

Question

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1.

You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content.

You need to optimize the content filter configurations by running tests on sample questions.

Solution: From Content Safety Studio, you use the Safety metaprompt feature to run the tests.

Does this meet the requirement?

  • A. Yes
  • B. No

Suggested Answer

B

Comments (4)

Comment 1

ID: 1701154 User: Mo2010 Badges: - Relative Date: 2 months, 2 weeks ago Absolute Date: Tue 23 Dec 2025 01:04 Selected Answer: A Upvotes: 1

Yes is CORRECT. The Safety metaprompt feature is in Content Safety Studio.

Comment 2

ID: 1400444 User: Mattt Badges: - Relative Date: 11 months, 4 weeks ago Absolute Date: Wed 19 Mar 2025 10:39 Selected Answer: B Upvotes: 1

The Safety metaprompt feature in Content Safety Studio is used to guide AI models in generating safer responses by providing safety-related instructions within prompts. However, it is not designed for testing and optimizing content filter configurations by analyzing sample inputs and outputs for objectionable content.

Comment 3

ID: 1323094 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sat 07 Dec 2024 13:18 Selected Answer: B Upvotes: 1

https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview#product-features

Comment 4

ID: 1304731 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 23:09 Selected Answer: B Upvotes: 3

The answer is B. No.

Explanation:

The Safety metaprompt feature is generally used to enhance safety by adding specific guidance to the model’s responses. However, it is not designed for directly testing or optimizing content filter configurations with sample questions in Content Safety Studio.

To meet the requirement of testing content filters on sample questions, you should use the Moderate text content feature, as it specifically allows you to check for objectionable content in text input and output. Therefore, using the Safety metaprompt feature does not fulfill the requirement.
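The "Moderate text content" call this comment recommends can be sketched as a Content Safety text:analyze request. This is a sketch under assumptions: the endpoint, API version, key placeholder, and category list are illustrative, and no network call is made.

```python
# Sketch of building an Azure AI Content Safety "text:analyze" request for
# moderating text. All values are placeholders; only the URL, headers, and
# body are constructed, with no network call.
def build_text_analyze_request(endpoint: str, text: str) -> tuple:
    url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"
    body = {
        "text": text,
        "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
        "outputType": "FourSeverityLevels",
    }
    headers = {"Ocp-Apim-Subscription-Key": "<CS1-key>",
               "Content-Type": "application/json"}
    return url, headers, body

url, headers, body = build_text_analyze_request(
    "https://cs1.cognitiveservices.azure.com", "sample question text")
```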

6. AI-102 Topic 1 Question 60

Sequence
45
Discussion ID
135042
Source URL
https://www.examtopics.com/discussions/microsoft/view/135042-exam-ai-102-topic-1-question-60-discussion/
Posted By
-
Posted At
March 2, 2024, 9:28 a.m.

Question

HOTSPOT -

You have an Azure OpenAI resource named AI1 that hosts three deployments of the GPT 3.5 model. Each deployment is optimized for a unique workload.

You plan to deploy three apps. Each app will access AI1 by using the REST API and will use the deployment that was optimized for the app's intended workload.

You need to provide each app with access to AI1 and the appropriate deployment. The solution must ensure that only the apps can access AI1.

What should you use to provide access to AI1, and what should each app use to connect to its appropriate deployment? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Comments (23)

Comment 1

ID: 1164322 User: Harry300 Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 19:34 Selected Answer: - Upvotes: 32

API key
Deployment name (Different deployments can be configured on oai.azure.com)

Comment 1.1

ID: 1288005 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Mon 23 Sep 2024 08:11 Selected Answer: - Upvotes: 2

This is correct; an endpoint itself can have multiple deployments. You need to specify the deployment you require out of the three, so it has to be the deployment name.

Comment 1.1.1

ID: 1288006 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Mon 23 Sep 2024 08:13 Selected Answer: - Upvotes: 2

code example here shows deployment name and API key - https://learn.microsoft.com/en-us/azure/ai-services/openai/quickstart?tabs=command-line%2Ctypescript%2Cpython-new&pivots=programming-language-python

Comment 1.1.1.1

ID: 1322867 User: nkorczynski Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 18:38 Selected Answer: - Upvotes: 1

REST API reference:
With Azure OpenAI the model parameter requires model deployment name. If your model deployment name is different than the underlying model name then you would adjust your code to "model": "{your-custom-model-deployment-name}".
https://learn.microsoft.com/en-us/azure/ai-services/openai/assistants-quickstart?tabs=command-line%2Cjavascript-keyless%2Ctypescript-keyless&pivots=rest-api

Comment 2

ID: 1182357 User: varinder82 Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 10:39 Selected Answer: - Upvotes: 8

Final Answer:
API Key
Deployment Name:

Comment 3

ID: 1698724 User: rbh1090 Badges: Most Recent Relative Date: 3 months ago Absolute Date: Thu 11 Dec 2025 03:57 Selected Answer: - Upvotes: 1

Why Bearer Token: This implies the use of Azure Active Directory (Microsoft Entra ID) authentication rather than a shared API Key.

- API Keys are shared secrets. If a key is leaked, anyone can access the resource. They do not strictly ensure "only the apps" have access.

- Bearer Tokens are used when authenticating via Managed Identities. You can assign an RBAC role (like Cognitive Services OpenAI User) specifically to the identity of each app. This ensures that access is strictly limited to those specific application identities, satisfying the security constraint.
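For comparison, a sketch of the two header styles this comment contrasts. The token value is a placeholder; in practice it would come from Microsoft Entra ID, for example via azure-identity's DefaultAzureCredential with the "https://cognitiveservices.azure.com/.default" scope:

```python
def build_headers(auth: str, secret: str) -> dict:
    """Illustrative only: the two ways a caller can authenticate to the resource."""
    if auth == "api-key":
        # Shared secret: anyone holding the key can call the resource.
        return {"api-key": secret}
    if auth == "bearer":
        # Entra ID token: scoped to an identity that holds an RBAC role
        # such as "Cognitive Services OpenAI User".
        return {"Authorization": f"Bearer {secret}"}
    raise ValueError(f"unknown auth style: {auth}")

print(build_headers("bearer", "<entra-id-token>"))
```

Which header the exam expects depends on whether the scenario stresses strict per-app access (bearer token) or simple REST access (API key), which is exactly the split in this thread.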

Comment 4

ID: 1362508 User: avneetsingh_bungai Badges: - Relative Date: 1 year ago Absolute Date: Thu 27 Feb 2025 13:09 Selected Answer: - Upvotes: 4

Answer:
1 An API Key
2 A Deployment Name

Comment 5

ID: 1298965 User: Sujeeth Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Wed 16 Oct 2024 23:30 Selected Answer: - Upvotes: 2

API Key: Each app should use an API Key for secure access to the Azure OpenAI resource. API keys are commonly used to authenticate requests to Azure OpenAI resources.

Deployment name: Each app should specify the deployment name to connect to the appropriate GPT 3.5 deployment optimized for its unique workload.

The deployment name ensures the app uses the correct model version or configuration hosted in AI1.

Comment 6

ID: 1275840 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 06:40 Selected Answer: - Upvotes: 4

The answer is api-key (no confusion) and deployment_name. The deployment name option is meant to confuse: there are three deployments of the model, and the endpoint of all three remains the same. I work with multiple deployments all the time, but still I forgot that the endpoint for Azure OpenAI is the same for all deployments. Of course, to make a call you need both the endpoint and the deployment name, but it looks like Microsoft wants to confuse you by mentioning REST. Just sad this company does this.
The answer is now very clear: api-key and deployment_name (what Microsoft wants as the answer).

Comment 6.1

ID: 1275842 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 06:45 Selected Answer: - Upvotes: 2

But looking at the answer options, it is even more confusing. They did not say just "endpoint" but "deployment endpoint". In Azure OpenAI Studio, each deployment lists an "endpoint" as the target URL that includes the deployment name, so it is technically a deployment endpoint. We'd have to check with a lawyer. What will I answer? Probably deployment endpoint (just a random guess), but maybe they only want you to know that the endpoint is the same for all deployments. Still not clear.

Comment 6.1.1

ID: 1275843 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 06:50 Selected Answer: - Upvotes: 1

Or maybe this Microsoft employee, in his inexperience, thought that for SDK use you need to give the deployment name (in addition to the key and resource endpoint), but for the REST API you need the deployment endpoint (endpoint + deployment name), which can be copied from Azure OpenAI Studio. So did he think the right answer is deployment_endpoint?

Comment 6.1.1.1

ID: 1275844 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 06:53 Selected Answer: - Upvotes: 1

Hoping the documentation he read did not change between when the question was created and now, I will still go for deployment_name (but that alone is NOT enough to make a REST call).

Comment 7

ID: 1275835 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 06:24 Selected Answer: - Upvotes: 2

I have years of experience using this, but I can't even understand the intent of the question. The build-up is fine, but the question and the options make me really wonder. We use RBAC to connect in an enterprise environment.

It looks like the org that Microsoft outsourced this to (getting money from them instead of paying) based the question on some image from the REST API documentation. My guess is that the expected answer is: api-key and end_point. The REST API uses api-key as a header and the endpoint to connect.

The scenario describing separate deployments does not make sense. An attempt at confusing the developer with real-world experience. Nice work, vendor.

Comment 8

ID: 1264957 User: shanakrs Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 05:45 Selected Answer: - Upvotes: 1

Answer is
An API Key
A Deployment Endpoint

https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/switching-endpoints

Note: There is a key requirement in the question ("Each app will access AI1 by using the REST API"). The apps will connect to AI1 with REST API calls, and to connect to Azure OpenAI with the REST API, it is recommended to use an API KEY and ENDPOINT, as mentioned in the above article.

Comment 9

ID: 1263288 User: Sylviaaaaaaa Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Sat 10 Aug 2024 04:31 Selected Answer: - Upvotes: 4

API key and deployment name.
https://learn.microsoft.com/en-us/azure/ai-services/openai/quickstart?tabs=command-line%2Cpython-new&pivots=programming-language-python

Comment 10

ID: 1248783 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 16 Jul 2024 11:02 Selected Answer: - Upvotes: 1

An API key
A deployment endpoint

Comment 11

ID: 1235385 User: yfontana Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 14:33 Selected Answer: - Upvotes: 5

The Azure portal and documentation consistently talk about each Azure Open AI resource having a single endpoint, and each deployment having a name.

So the correct answer is most likely:
An API key
A deployment name

Comment 12

ID: 1230907 User: gary_cooper Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 15 Jun 2024 13:27 Selected Answer: - Upvotes: 1

An API key
A deployment endpoint

Comment 13

ID: 1221244 User: PeteColag Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 22:22 Selected Answer: - Upvotes: 1

While bearer tokens are commonly used for authentication, they are not specifically mentioned in the context of Azure OpenAI resource access. API keys are the recommended way to authenticate applications.

So, the answer should be API key and deployment endpoint.

Comment 14

ID: 1221072 User: fuck_india Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 17:01 Selected Answer: - Upvotes: 3

1. An API key
2. A deployment endpoint

Comment 15

ID: 1217345 User: funny_penguin Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 11:24 Selected Answer: - Upvotes: 3

on exam today, I selected Api key and deployment endpoint

Comment 16

ID: 1202665 User: Jimmy1017 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Fri 26 Apr 2024 16:53 Selected Answer: - Upvotes: 4

An API key
A deployment endpoint

Comment 17

ID: 1201249 User: wheebe Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 24 Apr 2024 11:43 Selected Answer: - Upvotes: 3

ChatGPT's answer was bearer token and deployment name.

7. AI-102 Topic 7 Question 10

Sequence
62
Discussion ID
136127
Source URL
https://www.examtopics.com/discussions/microsoft/view/136127-exam-ai-102-topic-7-question-10-discussion/
Posted By
GHill1982
Posted At
March 15, 2024, 6:05 a.m.

Question

HOTSPOT
-

You have an Azure subscription that contains an Azure OpenAI resource named AI1.

You plan to develop a console app that will answer user questions.

You need to call AI1 and output the results to the console.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 6 comments Click to expand

Comment 1

ID: 1232810 User: rookiee1111 Badges: Highly Voted Relative Date: 1 year, 8 months ago Absolute Date: Wed 19 Jun 2024 11:46 Selected Answer: - Upvotes: 7

Given answer is correct
1) ChatCompletion.create - the API for creating a chat completion
2) response.choices[0].text - picks the text of the highest-ranked response choice

Comment 2

ID: 1606018 User: dgowrishankar Badges: Most Recent Relative Date: 6 months, 1 week ago Absolute Date: Thu 04 Sep 2025 05:25 Selected Answer: - Upvotes: 2

Given answer is correct. This question was asked in September 2025 exam

Comment 3

ID: 1275555 User: testmaillo020 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sat 31 Aug 2024 13:00 Selected Answer: - Upvotes: 1

Answer is correct

Comment 4

ID: 1189258 User: carcasgon Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 04 Apr 2024 13:16 Selected Answer: - Upvotes: 4

Correct. Under Completions > Example response
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#completions

Comment 5

ID: 1181180 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 23 Mar 2024 23:07 Selected Answer: - Upvotes: 3

response = client.chat.completions.create(...)

# Output the response to the console
print(response.choices[0].message.content)

The given answers are CORRECT.
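Since the question's code screenshot is not reproduced here, this is only a hedged sketch of such a console app using the current openai Python SDK (v1+); the endpoint, key, and deployment name are placeholders. Commenters differ on `choices[0].text` (legacy completions) versus `choices[0].message.content` (chat completions); the modern chat API uses the latter:

```python
def build_messages(question: str) -> list:
    """Assemble the chat history sent to the model."""
    return [
        {"role": "system", "content": "You answer user questions."},
        {"role": "user", "content": question},
    ]

def main():
    from openai import AzureOpenAI  # pip install openai
    client = AzureOpenAI(
        azure_endpoint="https://ai1.openai.azure.com",  # placeholder
        api_key="<your-api-key>",
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        model="<deployment-name>",  # the deployment name, not the model family
        messages=build_messages(input("Question: ")),
    )
    # Output the response to the console:
    print(response.choices[0].message.content)

# main()  # uncomment to run interactively against a real resource
print(build_messages("What is 2+2?")[1]["content"])
```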

Comment 6

ID: 1174051 User: GHill1982 Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Fri 15 Mar 2024 06:05 Selected Answer: - Upvotes: 2

Answer is correct.

8. AI-102 Topic 7 Question 12

Sequence
63
Discussion ID
135219
Source URL
https://www.examtopics.com/discussions/microsoft/view/135219-exam-ai-102-topic-7-question-12-discussion/
Posted By
chandiochan
Posted At
March 5, 2024, 5:37 a.m.

Question

HOTSPOT
-

You have an Azure subscription.

You need to create a new resource that will generate fictional stories in response to user prompts. The solution must ensure that the resource uses a customer-managed key to protect data.

How should you complete the script? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 8 comments Click to expand

Comment 1

ID: 1221030 User: taiwan_is_not_china Badges: Highly Voted Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 16:21 Selected Answer: - Upvotes: 8

1. OpenAI
2. --encryption

Comment 1.1

ID: 1243858 User: anto69 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 07 Jul 2024 15:02 Selected Answer: - Upvotes: 1

Agree with you

Comment 2

ID: 1606019 User: dgowrishankar Badges: Most Recent Relative Date: 6 months, 1 week ago Absolute Date: Thu 04 Sep 2025 05:26 Selected Answer: - Upvotes: 2

1. OpenAI
2. --encryption

This question was asked in September 2025 exam

Comment 3

ID: 1192419 User: ProfessorZ Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 09 Apr 2024 18:45 Selected Answer: - Upvotes: 4

--encryption is correct:

https://learn.microsoft.com/en-us/cli/azure/cognitiveservices/account?view=azure-cli-latest
az cognitiveservices account create -n myresource -g myResourceGroup --assign-identity --kind TextAnalytics --sku S -l WestEurope --yes
--encryption '{
"keySource": "Microsoft.KeyVault",
"keyVaultProperties": {
"keyName": "KeyName",
"keyVersion": "secretVersion",
"keyVaultUri": "https://issue23056kv.vault.azure.net/"
}
}'
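The pieces of the script can be assembled as below; the Key Vault values are placeholders, and the SKU is an assumption (the CLI reference example above uses --sku S, while Azure OpenAI resources typically use S0):

```python
import json

# Customer-managed-key settings, matching the --encryption JSON shape
# shown in the CLI reference example above. All values are placeholders.
encryption = {
    "keySource": "Microsoft.KeyVault",
    "keyVaultProperties": {
        "keyName": "<key-name>",
        "keyVersion": "<key-version>",
        "keyVaultUri": "https://<vault-name>.vault.azure.net/",
    },
}

cmd = ["az", "cognitiveservices", "account", "create",
       "-n", "myresource", "-g", "myResourceGroup",
       "--kind", "OpenAI",            # first blank: the resource kind
       "--sku", "S0", "-l", "WestEurope",
       "--assign-identity", "--yes",
       "--encryption", json.dumps(encryption)]  # second blank: CMK config
print(" ".join(cmd[:4]))
```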

Comment 4

ID: 1188293 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 02 Apr 2024 22:20 Selected Answer: - Upvotes: 3

az cognitiveservices account create \
--name myresource \
--resource-group myResourceGroup \
--kind OpenAI \
--sku S1 \
--location WestEurope \

Comment 5

ID: 1174054 User: GHill1982 Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Fri 15 Mar 2024 06:10 Selected Answer: - Upvotes: 2

Answer is correct.

Comment 6

ID: 1170519 User: Murtuza Badges: - Relative Date: 2 years ago Absolute Date: Sun 10 Mar 2024 19:07 Selected Answer: - Upvotes: 1

Azure OpenAI Service models can do everything from generating original stories to performing complex text analysis.

Comment 7

ID: 1166222 User: chandiochan Badges: - Relative Date: 2 years ago Absolute Date: Tue 05 Mar 2024 05:37 Selected Answer: - Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/openai/encrypt-data-at-rest
https://learn.microsoft.com/en-us/cli/azure/cognitiveservices/account?view=azure-cli-latest#az-cognitiveservices-account-create

9. AI-102 Topic 7 Question 18

Sequence
66
Discussion ID
149574
Source URL
https://www.examtopics.com/discussions/microsoft/view/149574-exam-ai-102-topic-7-question-18-discussion/
Posted By
jafaca
Posted At
Oct. 15, 2024, 6:37 p.m.

Question

You have a custom Azure OpenAI model.

You have the files shown in the following table.

image

You need to prepare training data for the model by using the OpenAI CLI data preparation tool.

Which files can you upload to the tool?

  • A. File1.tsv only
  • B. File2.xml only
  • C. File3.pdf only
  • D. File4.xlsx only
  • E. File1.tsv and File4.xlsx only
  • F. File1.tsv, File2.xml and File4.xlsx only
  • G. File1.tsv, File2.xml, File3.pdf and File4.xlsx

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 16 comments Click to expand

Comment 1

ID: 1298344 User: jafaca Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Tue 15 Oct 2024 18:37 Selected Answer: A Upvotes: 10

tsv or csv only

Comment 1.1

ID: 1579770 User: StelSen Badges: - Relative Date: 8 months, 3 weeks ago Absolute Date: Mon 23 Jun 2025 07:17 Selected Answer: - Upvotes: 1

JSON also can...

Comment 2

ID: 1601542 User: anonymous0 Badges: Most Recent Relative Date: 6 months, 3 weeks ago Absolute Date: Fri 22 Aug 2025 16:59 Selected Answer: E Upvotes: 1

Maximum file size for assistants and fine-tuning 512 MB
200 MB via the Azure AI Foundry portal.
ref: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/quotas-limits

Comment 3

ID: 1579494 User: sema2232 Badges: - Relative Date: 8 months, 3 weeks ago Absolute Date: Sun 22 Jun 2025 06:59 Selected Answer: A Upvotes: 1

A correct

Comment 4

ID: 1578341 User: LarcAi_Training Badges: - Relative Date: 8 months, 3 weeks ago Absolute Date: Tue 17 Jun 2025 16:09 Selected Answer: A Upvotes: 1

200 MB is too big

Comment 5

ID: 1577859 User: 2315046 Badges: - Relative Date: 8 months, 4 weeks ago Absolute Date: Mon 16 Jun 2025 07:21 Selected Answer: A Upvotes: 1

At the moment, file uploads are limited to 100MB per file, so a 200MB .xlsx file exceeds the upload limit.

Comment 6

ID: 1563990 User: tech_rum Badges: - Relative Date: 10 months, 2 weeks ago Absolute Date: Sat 26 Apr 2025 23:08 Selected Answer: A Upvotes: 3

File | Accepted? | Reason
File1.tsv | ✅ | TSV is supported and the file size is 80 MB (acceptable).
File2.xml | ❌ | XML is NOT supported.
File3.pdf | ❌ | PDF is NOT supported.
File4.xlsx | ❌ | XLSX is supported format-wise, but the file is 200 MB, too large for most uploads (Azure OpenAI max is often 100 MB per file).

Comment 6.1

ID: 1575633 User: 2315046 Badges: - Relative Date: 9 months, 1 week ago Absolute Date: Sun 08 Jun 2025 04:58 Selected Answer: - Upvotes: 1

Agree:
No, you cannot upload a 200 MB .xlsx file directly to the OpenAI CLI data preparation tool.

The maximum file size for uploads via the OpenAI CLI is typically 100 MB for .jsonl files used in fine-tuning and batch processing.
While .xlsx is a supported input format, it must be converted to .jsonl, and the converted file must also stay within the 100 MB limit.
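The format and size constraints this thread debates can be modeled roughly as follows. The 100 MB cap and the individual file sizes are taken from the commenters' descriptions, not from the (missing) question image, so treat them as assumptions:

```python
# Formats the OpenAI CLI data preparation tool accepts, per the
# fine-tuning docs cited below in this thread.
SUPPORTED = {".csv", ".tsv", ".xlsx", ".json", ".jsonl"}
MAX_MB = 100  # assumed per-file upload cap cited by commenters

def uploadable(name: str, size_mb: int) -> bool:
    """True if the file passes both the format and the size check."""
    ext = "." + name.rsplit(".", 1)[-1].lower()
    return ext in SUPPORTED and size_mb <= MAX_MB

# File sizes as described in the comments (File1 80 MB, File4 200 MB).
files = [("File1.tsv", 80), ("File2.xml", 30),
         ("File3.pdf", 50), ("File4.xlsx", 200)]
print([n for n, s in files if uploadable(n, s)])
```

Under these assumptions only File1.tsv survives both checks, which is how commenters arguing for answer A reconcile it with XLSX being a supported format.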

Comment 7

ID: 1335114 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Wed 01 Jan 2025 06:33 Selected Answer: E Upvotes: 2

Option E

Comment 8

ID: 1331423 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Wed 25 Dec 2024 08:36 Selected Answer: E Upvotes: 3

Option E
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?tabs=azure-openai%2Ccompletionfinetuning%2Cpython-new&pivots=programming-language-studio#openai-cli-data-preparation-tool

Comment 9

ID: 1323441 User: Retep Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 08 Dec 2024 09:18 Selected Answer: E Upvotes: 4

This tool accepts files in the following data formats, if they contain a prompt and a completion column/key:

Comma-separated values (CSV)
Tab-separated values (TSV)
Microsoft Excel workbook (XLSX)
JavaScript Object Notation (JSON)
JSON Lines (JSONL)
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?tabs=azure-openai%2Ccompletionfinetuning%2Cpython-new&pivots=programming-language-studio#openai-cli-data-preparation-tool

Comment 10

ID: 1315863 User: Turst Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 21 Nov 2024 15:27 Selected Answer: - Upvotes: 2

Data must be in JSONL format. The OpenAI CLI data preparation tool helps to convert the data. It's clearly A and E => https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?tabs=azure-openai%2Ccompletionfinetuning%2Cpython-new&pivots=programming-language-studio#openai-cli-data-preparation-tool

Comment 10.1

ID: 1315864 User: Turst Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 21 Nov 2024 15:29 Selected Answer: - Upvotes: 2

Sorry! It's clearly E!

Comment 10.1.1

ID: 1329923 User: sudhav11n Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sat 21 Dec 2024 11:30 Selected Answer: - Upvotes: 1

Yes, xlsx is an allowed format, but the size must be under the limit, and 200 MB exceeds it. So the answer is A.

Comment 11

ID: 1304300 User: e41f7aa Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 06:18 Selected Answer: - Upvotes: 2

Answer is A. The OpenAI CLI data preparation tool currently supports the following file formats:

JSONL: This is the preferred format for fine-tuning data. Each line in the file should be a valid JSON object representing a training example.
CSV: Comma-separated values files can be used, but they might require some preprocessing to convert them into the correct JSONL format.
TXT: Plain text files can be used, but they will likely need significant preprocessing to extract the relevant information and structure it as JSONL.
Therefore, the answer is A. File1.tsv only.

Comment 12

ID: 1304213 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 00:19 Selected Answer: E Upvotes: 3

Answer is E, File 1 and 4, according to this link, file format (whether a CSV, JSON, XLSX, TSV or JSONL) is allowed as long as it's represented as a JSONL training example of a prompt-completion pair.

10. AI-102 Topic 7 Question 3

Sequence
94
Discussion ID
134944
Source URL
https://www.examtopics.com/discussions/microsoft/view/134944-exam-ai-102-topic-7-question-3-discussion/
Posted By
audlindr
Posted At
Feb. 29, 2024, 9:45 p.m.

Question

HOTSPOT
-

You build a chatbot by using Azure OpenAI Studio.

You need to ensure that the responses are more deterministic and less creative.

Which two parameters should you configure? To answer, select the appropriate parameters in the answer area.

NOTE: Each correct answer is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 8 comments Click to expand

Comment 1

ID: 1170430 User: Murtuza Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Sun 10 Mar 2024 16:54 Selected Answer: - Upvotes: 8

Given answers are correct

Comment 2

ID: 1568751 User: MeanJean Badges: Most Recent Relative Date: 10 months ago Absolute Date: Wed 14 May 2025 02:05 Selected Answer: - Upvotes: 1

I would have picked Top P and Temperature as well, but the recommendation is to configure one and leave the other at its default value.

Comment 3

ID: 1308506 User: jolimon Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Thu 07 Nov 2024 19:44 Selected Answer: - Upvotes: 2

Answers are correct.

Comment 4

ID: 1235144 User: LM12 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 07:39 Selected Answer: - Upvotes: 3

was on exam 20.06.24

Comment 5

ID: 1234354 User: exnaniantwort Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 12:59 Selected Answer: - Upvotes: 4

temperature controls the "creativity" of the generated text, between 0 and 2. A higher temperature will result in more diverse and unexpected responses, while a lower temperature will result in more conservative and predictable responses. The default value for temperature is 1.0, but you can experiment with different values to see what works best for your use case. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both.

top_p - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 1 is the default value. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
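A sketch of how these knobs are passed in practice. The values are illustrative, and as the comment above notes, you would normally alter one or the other, not both:

```python
# These are request parameters passed alongside the messages,
# not fields inside a message.
deterministic = {
    "temperature": 0.0,  # argmax-like sampling: conservative, repeatable
    # "top_p": 0.1,      # alternative knob; alter this OR temperature
}

creative = {
    "temperature": 0.9,  # more diverse, less predictable output
}

# e.g. client.chat.completions.create(model=deployment, messages=msgs,
#                                     **deterministic)
print(sorted(deterministic))
```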

Comment 6

ID: 1230531 User: NagaoShingo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 16:23 Selected Answer: - Upvotes: 3

1. Temperature
2. Top P

Comment 7

ID: 1221040 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 16:33 Selected Answer: - Upvotes: 2

Temperature
Top P

Comment 8

ID: 1206301 User: 9H3zmT6 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sat 04 May 2024 01:37 Selected Answer: - Upvotes: 3

To make the responses more deterministic and less creative, it is recommended to lower the values of Temperature and TopP.

11. AI-102 Topic 7 Question 8

Sequence
101
Discussion ID
135102
Source URL
https://www.examtopics.com/discussions/microsoft/view/135102-exam-ai-102-topic-7-question-8-discussion/
Posted By
Harry300
Posted At
March 3, 2024, 10:30 a.m.

Question

HOTSPOT
-

You have an Azure subscription that contains an Azure OpenAI resource named AI1.

You build a chatbot that will use AI1 to provide generative answers to specific questions.

You need to ensure that the responses are more creative and less deterministic.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 18 comments Click to expand

Comment 1

ID: 1185175 User: chandiochan Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Fri 29 Mar 2024 03:32 Selected Answer: - Upvotes: 14

I think this must be user instead of system role. ChatRole.User identifies the text as coming from the user in the conversation. This is important because the AI model will use this information to understand the context of the prompt and tailor its response accordingly.

Comment 2

ID: 1230529 User: NagaoShingo Badges: Highly Voted Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 16:21 Selected Answer: - Upvotes: 9

1. user
2. temperature

Comment 3

ID: 1410065 User: sooss Badges: Most Recent Relative Date: 11 months, 3 weeks ago Absolute Date: Tue 25 Mar 2025 15:26 Selected Answer: - Upvotes: 1

It cannot be temperature since it is set to -1. The temperature range is 0 to 2. It is frequency_penalty or presence_penalty.

Comment 3.1

ID: 1410066 User: sooss Badges: - Relative Date: 11 months, 3 weeks ago Absolute Date: Tue 25 Mar 2025 15:28 Selected Answer: - Upvotes: 2

I apologize. It is temperature. I confused the equals sign with a minus.

Comment 4

ID: 1312671 User: Alan_CA Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 15 Nov 2024 15:45 Selected Answer: - Upvotes: 1

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "bla bla bla"},
        {"role": "user", "content": "bla bla bla"},
    ],
    temperature=0.7,  # temperature is a top-level parameter, not a message
)

Comment 5

ID: 1286704 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Fri 20 Sep 2024 09:35 Selected Answer: - Upvotes: 2

It's user, not system; you set the temperature of the response you want to receive when the user prompt is sent.

Comment 6

ID: 1273758 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Wed 28 Aug 2024 05:55 Selected Answer: - Upvotes: 4

Usually you start with a system message, and seeing that messages is still empty I would say system. But content being empty makes it hard to really determine. System is optional, so next would be user or assistant. If it's a free chat, then assistant won't be next, but user.

This is really a terrible question because we could have three options. You can set the temperature with any of them; it just doesn't make sense with system. Because messages is an array, you can change the temperature constantly. It makes the most sense with user, I guess. But again, a terrible code example to be honest.

Comment 7

ID: 1251677 User: nithin_reddy Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Sat 20 Jul 2024 11:59 Selected Answer: - Upvotes: 2

Assistant and Temperature as per ChatGPT, I think that's right

Comment 8

ID: 1245256 User: 34c89bf Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Wed 10 Jul 2024 05:46 Selected Answer: - Upvotes: 1

A system message is optional, and the model's behavior without one is likely to be similar to using a generic message such as "You are a helpful assistant." So it should be system.
The second one is temperature.

Comment 9

ID: 1202339 User: chandiochan Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Fri 26 Apr 2024 03:48 Selected Answer: - Upvotes: 4

In the role dropdown, you would select "user" because you are simulating a user's input to which the chatbot should respond. The API call will use this context to generate a response as if it were the assistant. You only need to specify the role for the input messages you're including in your API call, not the responses you're expecting to generate.

Comment 9.1

ID: 1217285 User: TJ001 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 09:47 Selected Answer: - Upvotes: 2

It is the user message for which the response is generated by this API call, so I will go with user as well. Determinism and creativity are controlled by the temperature value, low or high.

Comment 9.1.1

ID: 1217290 User: TJ001 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 09:52 Selected Answer: - Upvotes: 2

The 'content' being blank here adds to the confusion :( ... how can it be blank if it is a user message? So it could be system as well, just to prime with the temperature setting. Please suggest if anyone has clarity.

Comment 9.1.1.1

ID: 1245536 User: 34c89bf Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Wed 10 Jul 2024 16:32 Selected Answer: - Upvotes: 1

A system message is optional, and the model's behavior without one is likely to be similar to using a generic message such as "You are a helpful assistant." So it should be system.
The second one is temperature.

Comment 10

ID: 1196139 User: shorymor Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 15 Apr 2024 19:06 Selected Answer: - Upvotes: 1

Microsoft Copilot seems to be clear on this one. Based on that, the answer is correct.

Its response to this question (I did not ask about temperature because that one is obvious):

To ensure that your chatbot's responses are more creative and less deterministic, you should use ChatRole.Assistant. This role represents the AI language model (such as GPT-3.5 Turbo) and allows for imaginative and varied answers. By assigning messages to the assistant role, you encourage the model to generate creative and less predictable content.

Comment 11

ID: 1188275 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 02 Apr 2024 21:34 Selected Answer: - Upvotes: 3

This one is SYSTEM here and not user. Looking at the code
Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages.
https://platform.openai.com/docs/guides/text-generation/chat-completions-api

Comment 11.1

ID: 1231372 User: fba825b Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 16 Jun 2024 14:52 Selected Answer: - Upvotes: 1

True, but if we expect an answer from the AI language model, we should send the user input to the model.

Comment 12

ID: 1166194 User: chandiochan Badges: - Relative Date: 2 years ago Absolute Date: Tue 05 Mar 2024 04:34 Selected Answer: - Upvotes: 4

In this context, when configuring the chatbot's behavior, you would use the "system" role to provide initial settings or instructions that affect the overall behavior of the chatbot. This is not an actual message that's part of the conversation with the user but rather a directive to the AI on how to conduct itself in the conversation. For example, you might use this to set the chatbot's personality, instruct it to prioritize certain types of information, or follow specific conversational guidelines.

The "assistant" role, on the other hand, is used for messages that simulate responses from the AI as part of the conversation with the user. It represents the chatbot's side of the dialogue.

Since you want to ensure that the responses are more creative and less deterministic, and this is a setting affecting the AI's behavior, the "system" role is the correct choice. You might include instructions in the "system" message that tell the AI to be more creative or to use less strict adherence to certain conversational rules.

Comment 13

ID: 1164648 User: Harry300 Badges: - Relative Date: 2 years ago Absolute Date: Sun 03 Mar 2024 10:30 Selected Answer: - Upvotes: 4

Should be system / temp.
Without a system prompt, the responses are more creative.

12. AI-102 Topic 8 Question 12

Sequence
110
Discussion ID
149971
Source URL
https://www.examtopics.com/discussions/microsoft/view/149971-exam-ai-102-topic-8-question-12-discussion/
Posted By
cf1086a
Posted At
Oct. 21, 2024, 5:21 p.m.

Question

You have an Azure subscription that contains an Azure OpenAI resource named AI1.

You build a chatbot that uses AI1 to provide generative answers to specific questions.

You need to ensure that questions intended to circumvent built-in safety features are blocked.

Which Azure AI Content Safety feature should you implement?

  • A. Monitor online activity
  • B. Jailbreak risk detection
  • C. Moderate text content
  • D. Protected material text detection

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 3 comments Click to expand

Comment 1

ID: 1324743 User: mbsff Badges: Highly Voted Relative Date: 1 year, 3 months ago Absolute Date: Tue 10 Dec 2024 21:38 Selected Answer: B Upvotes: 5

Now called Prompt shields for user Prompts
https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/jailbreak-detection#prompt-shields-for-user-prompts

Comment 1.1

ID: 1331755 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Thu 26 Dec 2024 02:52 Selected Answer: - Upvotes: 1

Azure OpenAI offers a feature called Prompt Shields (formerly known as Jailbreak Risk Detection) to protect against harmful or inappropriate content generated by AI models. This feature helps detect and mitigate risks associated with both User Prompt Attacks and Document Attacks

Comment 2

ID: 1366482 User: swap_c11 Badges: Most Recent Relative Date: 1 year ago Absolute Date: Sat 08 Mar 2025 05:03 Selected Answer: - Upvotes: 1

Updated to Prompt Shields.
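A sketch of the corresponding Prompt Shields (jailbreak risk detection) call against Azure AI Content Safety. The text:shieldPrompt operation, api-version, and field names below reflect the Content Safety REST reference at the time of writing; verify them against the current docs before relying on them. The request is built but not sent:

```python
def build_shield_prompt_request(endpoint: str, key: str, user_prompt: str):
    """URL, headers, and body for a Prompt Shields screening call."""
    url = f"{endpoint}/contentsafety/text:shieldPrompt?api-version=2024-09-01"
    headers = {"Ocp-Apim-Subscription-Key": key,
               "Content-Type": "application/json"}
    body = {"userPrompt": user_prompt, "documents": []}
    return url, headers, body

url, headers, body = build_shield_prompt_request(
    "https://<resource>.cognitiveservices.azure.com", "<key>",
    "Ignore your previous instructions and ...")
print(url)
# A response whose userPromptAnalysis.attackDetected is true means the
# prompt should be blocked before it ever reaches the chat model.
```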

13. AI-102 Topic 7 Question 13

Sequence
134
Discussion ID
143505
Source URL
https://www.examtopics.com/discussions/microsoft/view/143505-exam-ai-102-topic-7-question-13-discussion/
Posted By
anto69
Posted At
July 7, 2024, 3:05 p.m.

Question

HOTSPOT
-

You have a chatbot that uses Azure OpenAI to generate responses.

You need to upload company data by using Chat playground. The solution must ensure that the chatbot uses the data to answer user questions.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 7 comments Click to expand

Comment 1

ID: 1286795 User: mrg998 Badges: Highly Voted Relative Date: 1 year, 5 months ago Absolute Date: Fri 20 Sep 2024 13:39 Selected Answer: - Upvotes: 5

first one is chatCompletionsOptions()
second one is AzureChatExtensionConfiguration - see link here https://learn.microsoft.com/en-us/java/api/com.azure.ai.openai.models.azurechatextensionconfiguration?view=azure-java-preview. This details AzureChatExtensionConfiguration and ChatCompletionsOptions(); the other methods just don't exist in the SDK

Comment 1.1

ID: 1319399 User: nastolgia Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 19:15 Selected Answer: - Upvotes: 4

No. AzureExtensionsOptions: a general container for extension configurations.
Extensions: A property within AzureExtensionsOptions that specifies the actual extensions to use (e.g., AzureCognitiveSearchChatExtensionConfiguration).

Right answer is:
AzureCognitiveSearchChatExtensionConfiguration

Comment 1.1.1

ID: 1328632 User: Alan_CA Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Wed 18 Dec 2024 18:15 Selected Answer: - Upvotes: 3

nastolgia is right. See :
https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.OpenAI/1.0.0-beta.8/index.html#use-your-own-data-with-azure-openai

Comment 2

ID: 1273627 User: JakeCallham Badges: Highly Voted Relative Date: 1 year, 6 months ago Absolute Date: Tue 27 Aug 2024 20:01 Selected Answer: - Upvotes: 5

answer is correct

Comment 2.1

ID: 1273763 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Wed 28 Aug 2024 06:08 Selected Answer: - Upvotes: 2

AzureChatExtensionsOptions is the right answer

Comment 3

ID: 1341454 User: BenALGuhl Badges: Most Recent Relative Date: 1 year, 1 month ago Absolute Date: Thu 16 Jan 2025 06:58 Selected Answer: - Upvotes: 2

This was based on the beta version. Now with 2.1.0, the second selection would be completely different:
https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.OpenAI/2.1.0/index.html#use-your-own-data-with-azure-openai

options.AddDataSource(new AzureSearchChatDataSource()
{
Endpoint = new Uri("https://your-search-resource.search.windows.net"),
IndexName = "contoso-products-index",
Authentication = DataSourceAuthentication.FromApiKey(
Environment.GetEnvironmentVariable("OYD_SEARCH_KEY")),
});
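
For comparison with the C# snippet above, a hedged sketch of the equivalent "On Your Data" payload as the Python openai package would send it. The endpoint, index name, and key are placeholders copied from the snippet, not real resources, and only the payload is built here:

```python
# Building the "data_sources" entry that the Azure OpenAI "On Your Data"
# feature expects; all values below are placeholders for illustration.
data_source = {
    "type": "azure_search",
    "parameters": {
        "endpoint": "https://your-search-resource.search.windows.net",
        "index_name": "contoso-products-index",
        "authentication": {
            "type": "api_key",
            "key": "<search-admin-key>",
        },
    },
}

# With the current openai Python package this would be passed as:
#   client.chat.completions.create(..., extra_body={"data_sources": [data_source]})
request_extra_body = {"data_sources": [data_source]}
```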

Comment 4

ID: 1243859 User: anto69 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 07 Jul 2024 15:05 Selected Answer: - Upvotes: 3

Seems correct answer

14. AI-102 Topic 8 Question 5

Sequence
139
Discussion ID
149967
Source URL
https://www.examtopics.com/discussions/microsoft/view/149967-exam-ai-102-topic-8-question-5-discussion/
Posted By
cf1086a
Posted At
Oct. 21, 2024, 5:19 p.m.

Question

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1.

You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content.

You need to optimize the content filter configurations by running tests on sample questions.

Solution: From Content Safety Studio, you use the Protected material detection feature to run the tests.

Does this meet the requirement?

  • A. Yes
  • B. No

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 6 comments Click to expand

Comment 1

ID: 1335116 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Wed 01 Jan 2025 06:59 Selected Answer: B Upvotes: 4

No, using the Protected material detection feature from Content Safety Studio does not meet the requirement. The Protected material detection feature is designed to identify and manage sensitive or protected material, but it is not specifically intended for optimizing content filter configurations for objectionable content in a chatbot.

To optimize the content filter configurations for objectionable content, you should use the Moderate text content feature in Content Safety Studio. This feature is specifically designed to help you run tests on sample questions and check for objectionable content, ensuring that your chatbot's input and output adhere to safety and quality standards.

Comment 2

ID: 1330063 User: Jegababu Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sat 21 Dec 2024 15:16 Selected Answer: B Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/protected-material?tabs=text#user-scenarios

Comment 3

ID: 1326210 User: Andriki Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Fri 13 Dec 2024 19:05 Selected Answer: B Upvotes: 3

But using the true definition of objectionable (unpleasant or offensive), the choice is No, as this would be the text analyze API. Protected material detection is meant to block copyrighted material from being used in responses.

Comment 4

ID: 1323091 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sat 07 Dec 2024 13:14 Selected Answer: B Upvotes: 3

https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview#product-features

Comment 5

ID: 1304728 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 23:06 Selected Answer: - Upvotes: 1

B is Correct , here's what chatGPT says:

The answer is B. No.

Here’s why:

The Protected material detection feature in Content Safety Studio is typically designed to detect and monitor sensitive or protected content, but it may not directly apply to optimizing the filter configurations for a chatbot. To test content filtering settings specifically for objectionable or harmful content in AI-generated responses, you would likely use Content Safety’s standard filtering features directly on the input and output of the chatbot, rather than the Protected material detection feature.

For optimizing content filters, you would test using sample questions and responses in Content Safety Studio’s primary filtering and moderation tools, where you can review and adjust configurations.

Comment 6

ID: 1303377 User: Slapp1n Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Sat 26 Oct 2024 20:18 Selected Answer: A Upvotes: 2

The answer should be Yes:

The solution involves using the "Protected material detection" feature from Content Safety Studio to optimize content filter configurations by running tests on sample questions. This approach meets the requirement for testing and optimizing content safety configurations for generative AI output, ensuring that objectionable content is properly detected and managed.

15. AI-102 Topic 8 Question 6

Sequence
140
Discussion ID
149968
Source URL
https://www.examtopics.com/discussions/microsoft/view/149968-exam-ai-102-topic-8-question-6-discussion/
Posted By
cf1086a
Posted At
Oct. 21, 2024, 5:19 p.m.

Question

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1.

You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content.

You need to optimize the content filter configurations by running tests on sample questions.

Solution: From Content Safety Studio, you use the Moderate text content feature to run the tests.

Does this meet the requirement?

  • A. Yes
  • B. No

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 2 comments Click to expand

Comment 1

ID: 1335115 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Wed 01 Jan 2025 06:57 Selected Answer: A Upvotes: 2

Using the Moderate text content feature from Content Safety Studio to run tests on sample questions does meet the requirement. This approach allows you to analyze and optimize the content filter configurations effectively by ensuring that the input and output are checked for objectionable content. This helps in maintaining the quality and safety of the responses generated by the chatbot.

Comment 2

ID: 1304729 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 23:07 Selected Answer: - Upvotes: 4

The answer is A. Yes.

Explanation:

The Moderate text content feature in Content Safety Studio is designed to detect objectionable or harmful content in text. By using this feature, you can effectively test and optimize content filtering configurations by running sample questions and analyzing the responses for inappropriate content. This approach meets the requirement of optimizing the content filter configurations for your chatbot's input and output.

16. AI-102 Topic 8 Question 4

Sequence
142
Discussion ID
149966
Source URL
https://www.examtopics.com/discussions/microsoft/view/149966-exam-ai-102-topic-8-question-4-discussion/
Posted By
cf1086a
Posted At
Oct. 21, 2024, 5:19 p.m.

Question

HOTSPOT
-

You have an Azure subscription that contains an Azure AI Content Safety resource named CS1.

You need to call CS1 to identify whether a user request contains hateful language.

How should you complete the command? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 5 comments Click to expand

Comment 1

ID: 1315849 User: Turst Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 21 Nov 2024 14:41 Selected Answer: - Upvotes: 4

The answer is correct per https://learn.microsoft.com/en-us/azure/ai-services/content-safety/how-to/use-blocklist?tabs=windows%2Crest. The blocklist is referenced in the HTTP body as blocklistNames.

Comment 2

ID: 1313571 User: mrwiti Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 17 Nov 2024 15:33 Selected Answer: - Upvotes: 3

text:analyze - > https://learn.microsoft.com/en-us/azure/ai-services/content-safety/quickstart-text?tabs=visual-studio%2Cwindows&pivots=programming-language-rest#analyze-text-content

Comment 3

ID: 1304723 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 22:54 Selected Answer: - Upvotes: 4

I think it's correct, contentsafety and then text:analyze

Comment 4

ID: 1303530 User: 4670ccc Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Sun 27 Oct 2024 10:14 Selected Answer: - Upvotes: 2

2nd should be text/blocklists: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/how-to/use-blocklist?tabs=windows%2Crest

Comment 4.1

ID: 1331739 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Thu 26 Dec 2024 01:56 Selected Answer: - Upvotes: 1

No, you need the text analyze service.
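
A minimal sketch of the text:analyze request the comments describe, restricted to the Hate category. The resource name and api-version are placeholder assumptions; only the URL and body are constructed:

```python
import json

# Placeholder resource values for illustration only.
endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/contentsafety/text:analyze?api-version=2024-09-01"

# Restricting "categories" to Hate asks the service to score only that category.
body = {
    "text": "Sample user request to screen.",
    "categories": ["Hate"],
}
payload = json.dumps(body)
```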

17. AI-102 Topic 8 Question 11

Sequence
143
Discussion ID
149922
Source URL
https://www.examtopics.com/discussions/microsoft/view/149922-exam-ai-102-topic-8-question-11-discussion/
Posted By
cheetah313
Posted At
Oct. 21, 2024, 8:45 a.m.

Question

HOTSPOT
-

You have an Azure subscription that contains an Azure AI Content Safety resource named CS1.

You need to use the SDK to call CS1 to identify requests that contain harmful content.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 2 comments Click to expand

Comment 1

ID: 1304734 User: a8da4af Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 23:16 Selected Answer: - Upvotes: 5

Answer is ContentSafetyClient and AnalyzeTextOptions : https://learn.microsoft.com/en-us/azure/ai-services/content-safety/how-to/use-blocklist?tabs=windows%2Crest

Comment 1.1

ID: 1331753 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Thu 26 Dec 2024 02:50 Selected Answer: - Upvotes: 3

Correct:
ContentSafetyClient
AnalyzeTextOptions

18. AI-102 Topic 7 Question 5

Sequence
145
Discussion ID
135054
Source URL
https://www.examtopics.com/discussions/microsoft/view/135054-exam-ai-102-topic-7-question-5-discussion/
Posted By
Delta64
Posted At
March 2, 2024, 3:06 p.m.

Question

You build a chatbot that uses the Azure OpenAI GPT 3.5 model.

You need to improve the quality of the responses from the chatbot. The solution must minimize development effort.

What are two ways to achieve the goal? Each correct answer presents a complete solution.

NOTE: Each correct answer is worth one point.

  • A. Fine-tune the model.
  • B. Provide grounding content.
  • C. Add sample request/response pairs.
  • D. Retrain the language model by using your own data.
  • E. Train a custom large language model (LLM).

Suggested Answer

BC

Answer Description Click to expand


Community Answer Votes

Comments 12 comments Click to expand

Comment 1

ID: 1166189 User: chandiochan Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Tue 05 Mar 2024 04:22 Selected Answer: - Upvotes: 16

Here are two ways to improve the quality of the responses from the chatbot with minimal development effort:

1. Provide grounding content: This involves feeding the chatbot with relevant domain-specific information and data. This can include documents, articles, FAQs, or any other content related to the chatbot's purpose. By providing this context, the chatbot can better understand the user's intent and respond in a more relevant and informative way.

2. Add sample request/response pairs: This technique involves providing the chatbot with a set of pre-defined questions and their corresponding answers. This helps the chatbot learn the conversation patterns and phrasing related to its specific domain. By analyzing these examples, the chatbot can improve its ability to generate natural and consistent responses to user queries.

Both options (B. Provide grounding content and C. Add sample request/response pairs) achieve the goal of improving response quality with minimal development effort, as they do not require extensive retraining or model building.

Therefore, the two correct answers are:

B. Provide grounding content.
C. Add sample request/response pairs.

Comment 2

ID: 1181082 User: Murtuza Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Sat 23 Mar 2024 19:19 Selected Answer: BC Upvotes: 8

Remember that while fine-tuning (A) and custom models (E) can yield high-quality results, they often require significant development effort and computational resources. In contrast, grounding content and sample pairs offer pragmatic improvements with minimal overhead.

Comment 3

ID: 1331400 User: pabsinaz Badges: Most Recent Relative Date: 1 year, 2 months ago Absolute Date: Wed 25 Dec 2024 07:48 Selected Answer: BC Upvotes: 2

B and C because it is NOT talking about accuracy, instead is asking to improve the quality of the response

Comment 4

ID: 1273616 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 27 Aug 2024 19:36 Selected Answer: BC Upvotes: 3

Least development effort is key here

Comment 5

ID: 1246781 User: krzkrzkra Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 12 Jul 2024 15:31 Selected Answer: BC Upvotes: 1

Selected Answer: BC

Comment 6

ID: 1245006 User: Toby86 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 09 Jul 2024 19:14 Selected Answer: - Upvotes: 1

All of the other answers could work, but training a custom LLM is the highest effort, so that probably isn't it.

Comment 7

ID: 1235222 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:57 Selected Answer: BC Upvotes: 2

I say this answer is B and C.

Comment 8

ID: 1230476 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 14:19 Selected Answer: BC Upvotes: 2

BC is correct answer.

Comment 9

ID: 1221638 User: anto69 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 30 May 2024 16:00 Selected Answer: BC Upvotes: 3

B and C for me too

Comment 10

ID: 1221039 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 16:33 Selected Answer: BC Upvotes: 2

B and C are right answer.

Comment 11

ID: 1176214 User: chandiochan Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 18 Mar 2024 03:34 Selected Answer: BC Upvotes: 4

Must be B & C

Comment 12

ID: 1164144 User: Delta64 Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 15:06 Selected Answer: - Upvotes: 4

GPT 4 Answered:

B. Provide grounding content.
C. Add sample request/response pairs.

19. AI-102 Topic 8 Question 10

Sequence
147
Discussion ID
150483
Source URL
https://www.examtopics.com/discussions/microsoft/view/150483-exam-ai-102-topic-8-question-10-discussion/
Posted By
a8da4af
Posted At
Oct. 29, 2024, 11:12 p.m.

Question

You have an Azure subscription that contains an Azure AI Content Safety resource named CS1.

You plan to build an app that will analyze user-generated documents and identify obscure offensive terms.

You need to create a dictionary that will contain the offensive terms. The solution must minimize development effort.

What should you use?

  • A. a text classifier
  • B. language detection
  • C. text moderation
  • D. a blocklist

Suggested Answer

D

Answer Description Click to expand


Community Answer Votes

Comments 2 comments Click to expand

Comment 1

ID: 1330710 User: kennynelcon Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Mon 23 Dec 2024 08:05 Selected Answer: D Upvotes: 4

To handle a custom dictionary of offensive terms in Azure AI Content Safety, the blocklist feature is designed for exactly that scenario. You can supply your own list of obscure or custom offensive terms as a blocklist, and Content Safety will flag or block content containing those terms.

Comment 2

ID: 1304733 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 23:12 Selected Answer: D Upvotes: 4

Blocklist is what you would use: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/how-to/use-blocklist?tabs=windows%2Crest
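
The blocklist flow from the linked how-to can be sketched as three REST requests (create the list, add terms, reference it during analysis). All names and the api-version below are placeholder assumptions; only the URLs and bodies are built:

```python
# Placeholder names; the real calls are documented in the linked how-to.
endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
blocklist_name = "offensive-terms"
api_version = "2024-09-01"  # assumed version

# 1) Create (or update) the blocklist itself.
create_url = (f"{endpoint}/contentsafety/text/blocklists/"
              f"{blocklist_name}?api-version={api_version}")
create_body = {"description": "Obscure offensive terms for document screening."}

# 2) Add the custom terms as blocklist items.
add_url = (f"{endpoint}/contentsafety/text/blocklists/{blocklist_name}"
           f":addOrUpdateBlocklistItems?api-version={api_version}")
add_body = {"blocklistItems": [{"text": "term1"}, {"text": "term2"}]}

# 3) Reference the blocklist by name when analyzing text.
analyze_body = {"text": "document text ...", "blocklistNames": [blocklist_name]}
```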

20. AI-102 Topic 8 Question 7

Sequence
150
Discussion ID
149969
Source URL
https://www.examtopics.com/discussions/microsoft/view/149969-exam-ai-102-topic-8-question-7-discussion/
Posted By
cf1086a
Posted At
Oct. 21, 2024, 5:19 p.m.

Question

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1.

You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content.

You need to optimize the content filter configurations by running tests on sample questions.

Solution: From Content Safety Studio, you use the Monitor online activity feature to run the tests.

Does this meet the requirement?

  • A. Yes
  • B. No

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 2 comments Click to expand

Comment 1

ID: 1323089 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sat 07 Dec 2024 13:08 Selected Answer: B Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview#content-safety-studio-features

Comment 2

ID: 1304730 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 23:08 Selected Answer: - Upvotes: 1

The answer is B. No.

Explanation:

The Monitor online activity feature in Content Safety Studio is typically used for tracking and analyzing real-time online activities, which is not directly related to testing or optimizing content filtering configurations on sample questions. To optimize and test content filters for a chatbot, you would use features like Moderate text content that are specifically designed for detecting objectionable content in text inputs and outputs. Therefore, this solution does not meet the requirement.

21. AI-102 Topic 8 Question 9

Sequence
151
Discussion ID
149970
Source URL
https://www.examtopics.com/discussions/microsoft/view/149970-exam-ai-102-topic-8-question-9-discussion/
Posted By
cf1086a
Posted At
Oct. 21, 2024, 5:21 p.m.

Question

HOTSPOT
-

You have an Azure subscription that contains an Azure AI Content Safety resource.

You are building a social media app that will enable users to share images.

You need to configure the app to moderate inappropriate content uploaded by the users.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 2 comments Click to expand

Comment 1

ID: 1304732 User: a8da4af Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 23:11 Selected Answer: - Upvotes: 8

Answer is ContentSafetyClient and client.AnalyzeImage(request);

Comment 1.1

ID: 1323083 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sat 07 Dec 2024 12:51 Selected Answer: - Upvotes: 3

Correct, here is a link...

https://learn.microsoft.com/en-us/azure/ai-services/content-safety/quickstart-image?tabs=visual-studio%2Cwindows&pivots=programming-language-csharp
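
A minimal sketch of the image:analyze request body from the linked quickstart. The resource name and api-version are assumptions, and the image bytes are a stand-in; the REST API takes the image content as base64:

```python
import base64
import json

# Placeholder resource endpoint for illustration only.
endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/contentsafety/image:analyze?api-version=2024-09-01"

# Stand-in byte string; a real call would read the uploaded image file.
image_bytes = b"\x89PNG..."
body = {"image": {"content": base64.b64encode(image_bytes).decode("ascii")}}
payload = json.dumps(body)
```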

22. AI-102 Topic 7 Question 22

Sequence
152
Discussion ID
149575
Source URL
https://www.examtopics.com/discussions/microsoft/view/149575-exam-ai-102-topic-7-question-22-discussion/
Posted By
jafaca
Posted At
Oct. 15, 2024, 6:57 p.m.

Question

HOTSPOT
-

You have a chatbot that uses Azure OpenAI to generate responses.

You need to upload company data by using Chat playground. The solution must ensure that the chatbot uses the data to answer user questions.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 3 comments Click to expand

Comment 1

ID: 1298356 User: jafaca Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 15 Oct 2024 18:57 Selected Answer: - Upvotes: 1

Azure Document Intelligence for second, imho

Comment 1.1

ID: 1322866 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 18:38 Selected Answer: - Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/openai/use-your-data-quickstart?tabs=command-line%2Cjavascript-keyless%2Ctypescript-keyless%2Cpython&pivots=programming-language-python

Comment 1.2

ID: 1299191 User: jafaca Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Thu 17 Oct 2024 14:00 Selected Answer: - Upvotes: 9

It is AzureCognitiveSearch after all

23. AI-102 Topic 7 Question 23

Sequence
173
Discussion ID
149739
Source URL
https://www.examtopics.com/discussions/microsoft/view/149739-exam-ai-102-topic-7-question-23-discussion/
Posted By
Slapp1n
Posted At
Oct. 18, 2024, 12:12 p.m.

Question

HOTSPOT
-

You are building an app that will provide users with definitions of common AI terms.

You create the following Python code.

image

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 3 comments Click to expand

Comment 1

ID: 1315771 User: Turst Badges: Highly Voted Relative Date: 1 year, 3 months ago Absolute Date: Thu 21 Nov 2024 11:54 Selected Answer: - Upvotes: 10

The given Answer is correct. Try it in Chat Playground => N,Y,Y

Comment 2

ID: 1305355 User: Christian_garcia_martin Badges: Most Recent Relative Date: 1 year, 4 months ago Absolute Date: Thu 31 Oct 2024 10:24 Selected Answer: - Upvotes: 4

ChatGPT: N,Y,Y
MS Copilot: Y,Y,Y
Rider Copilot: Y,Y,Y

Y,Y,Y democracy wins.

Comment 3

ID: 1299616 User: Slapp1n Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Fri 18 Oct 2024 12:12 Selected Answer: - Upvotes: 2

I believe this is Y,Y,N

24. AI-102 Topic 7 Question 14

Sequence
179
Discussion ID
143924
Source URL
https://www.examtopics.com/discussions/microsoft/view/143924-exam-ai-102-topic-7-question-14-discussion/
Posted By
fqc
Posted At
July 15, 2024, 12:20 p.m.

Question

HOTSPOT
-

You have an Azure subscription that is linked to a Microsoft Entra tenant. The subscription ID is x1xx11x1-x111-xxxx-xxxx-x1111xxx11x1 and the tenant ID is 1y1y1yyy-1y1y-y1y1-yy11-y1y1y11111y1.

The subscription contains an Azure OpenAI resource named OpenAI1 that has a primary API key of 1111a111a11a111aaa11a1a1a11a11aa. OpenAI1 has a deployment named embeddings1 that uses the text-embedding-ada-002 model.

You need to query OpenAI1 and retrieve embeddings for text input.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 6 comments Click to expand

Comment 1

ID: 1273592 User: bp_a_user Badges: Highly Voted Relative Date: 1 year, 6 months ago Absolute Date: Tue 27 Aug 2024 18:57 Selected Answer: - Upvotes: 6

1 - API key
2 - deployment name

see here: https://learn.microsoft.com/en-us/javascript/api/@azure/openai/openaiclient?view=azure-node-preview#@azure-openai-openaiclient-getembeddings

Comment 2

ID: 1308508 User: jolimon Badges: Most Recent Relative Date: 1 year, 4 months ago Absolute Date: Thu 07 Nov 2024 19:53 Selected Answer: - Upvotes: 3

1. 1111a111a11a111aaa11a1a1a11a11aa
2. embeddings1
no doubts

Comment 3

ID: 1284643 User: 9c652a0 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Mon 16 Sep 2024 13:46 Selected Answer: - Upvotes: 4

1-APi key
2-deploymentname
The name of the model deployment (when using Azure OpenAI) or model name (when using non-Azure OpenAI) to use for this request.

Comment 4

ID: 1248253 User: fqc Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 15 Jul 2024 12:20 Selected Answer: - Upvotes: 2

1 - API key
2 - text-embedding-ada-002

Comment 4.1

ID: 1265497 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Wed 14 Aug 2024 07:02 Selected Answer: - Upvotes: 2

Agree, API key and the embedding model name

Comment 4.1.1

ID: 1283522 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sat 14 Sep 2024 09:07 Selected Answer: - Upvotes: 2

Wrong, it's: 1 - API key
2 - deployment name
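
A sketch of the resulting embeddings request using the placeholder values from the question. Note that the deployment name (embeddings1), not the underlying model name, goes into the request path; the endpoint form and api-version shown are assumptions:

```python
# Placeholder values mirroring the question; real resources will differ.
resource_endpoint = "https://openai1.openai.azure.com"  # assumed endpoint form
deployment = "embeddings1"   # the deployment name, NOT text-embedding-ada-002
api_key = "1111a111a11a111aaa11a1a1a11a11aa"

# The deployment name is part of the path; api-version is an example value.
url = (f"{resource_endpoint}/openai/deployments/{deployment}"
       f"/embeddings?api-version=2024-06-01")
headers = {"api-key": api_key, "Content-Type": "application/json"}
body = {"input": "text to embed"}
```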

25. AI-102 Topic 7 Question 17

Sequence
183
Discussion ID
149920
Source URL
https://www.examtopics.com/discussions/microsoft/view/149920-exam-ai-102-topic-7-question-17-discussion/
Posted By
cheetah313
Posted At
Oct. 21, 2024, 8:19 a.m.

Question

You have an Azure OpenAI model.

You have 500 prompt-completion pairs that will be used as training data to fine-tune the model.

You need to prepare the training data.

Which format should you use for the training data file?

  • A. CSV
  • B. XML
  • C. JSONL
  • D. TSV

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 3 comments Click to expand

Comment 1

ID: 1305343 User: Christian_garcia_martin Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Thu 31 Oct 2024 10:05 Selected Answer: - Upvotes: 1

Key word is "pair": that is a key-value format, so JSONL.

Comment 2

ID: 1304209 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 00:03 Selected Answer: C Upvotes: 1

https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning-functions

The correct answer is:

C. JSONL

Reasoning:
To fine-tune an Azure OpenAI model, the training data needs to be in JSONL (JSON Lines) format. Each line in the file represents a separate prompt-completion pair in JSON format. This format is required to ensure that Azure OpenAI correctly interprets each pair as an individual training instance.

Comment 3

ID: 1300779 User: cheetah313 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 21 Oct 2024 08:19 Selected Answer: C Upvotes: 1

JSONL.
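
A minimal sketch of producing a JSONL training file with the standard library. The chat "messages" record shape shown is the one used by newer fine-tuning APIs; older completion models used {"prompt": ..., "completion": ...} records instead. The sample pairs are illustrative only:

```python
import json

# A few illustrative prompt-completion pairs.
pairs = [
    ("What is Azure OpenAI?", "A service that hosts OpenAI models in Azure."),
    ("What format does fine-tuning data use?", "JSON Lines (JSONL)."),
]

# JSONL = one complete JSON object per line, one training example per line.
lines = []
for prompt, completion in pairs:
    record = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]
    }
    lines.append(json.dumps(record))

jsonl_text = "\n".join(lines)
```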

26. AI-102 Topic 7 Question 19

Sequence
184
Discussion ID
150032
Source URL
https://www.examtopics.com/discussions/microsoft/view/150032-exam-ai-102-topic-7-question-19-discussion/
Posted By
Rubby
Posted At
Oct. 22, 2024, 1:59 a.m.

Question

You have an Azure subscription that contains an Azure OpenAI resource named OpenAI1 and a user named User1.

You need to ensure that User1 can upload datasets to OpenAI1 and finetune the existing models. The solution must follow the principle of least privilege.

Which role should you assign to User1?

  • A. Cognitive Services OpenAI Contributor
  • B. Cognitive Services Contributor
  • C. Cognitive Services OpenAI User
  • D. Contributor

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 2 comments Click to expand

Comment 1

ID: 1304214 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 00:21 Selected Answer: A Upvotes: 3

A, verified with chatGPT

The correct answer is:

A. Cognitive Services OpenAI Contributor

Reasoning: The Cognitive Services OpenAI Contributor role provides the necessary permissions to manage the Azure OpenAI resource, including uploading datasets and fine-tuning models. This role is specific to OpenAI and offers the least privilege required for User1 to complete the tasks without granting additional permissions that are unnecessary for their role.

Comment 2

ID: 1301329 User: Rubby Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 22 Oct 2024 01:59 Selected Answer: A Upvotes: 3

A is correct. Cognitive Services OpenAI Contributor permission: Upload datasets for fine-tuning.
Ref:https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor

27. AI-102 Topic 7 Question 20

Sequence
185
Discussion ID
150420
Source URL
https://www.examtopics.com/discussions/microsoft/view/150420-exam-ai-102-topic-7-question-20-discussion/
Posted By
a8da4af
Posted At
Oct. 29, 2024, 12:23 a.m.

Question

You have an Azure subscription and 10,000 ASCII files.

You need to identify files that contain specific phrases. The solution must use cosine similarity.

Which Azure OpenAI model should you use?

  • A. text-embedding-ada-002
  • B. GPT-4
  • C. GPT-35 Turbo
  • D. GPT-4-32k

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 1 comment Click to expand

Comment 1

ID: 1304216 User: a8da4af Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 00:23 Selected Answer: A Upvotes: 6

Answer is A; searches based on mathematical similarity are always handled by embedding models.

ChatGPT answer:
The correct answer is:

A. text-embedding-ada-002

Reasoning: To identify files that contain specific phrases using cosine similarity, you should use an embedding model. The text-embedding-ada-002 model is specifically designed to create embeddings (numerical representations of text) that can be used for similarity searches, such as cosine similarity. This model is efficient and optimized for tasks that involve finding similarity between pieces of text, making it ideal for identifying files with specific phrases.

The other options (GPT-4, GPT-3.5 Turbo, and GPT-4-32k) are primarily designed for text generation and do not specialize in embedding-based similarity tasks.
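
The point in the comments can be made concrete. Below is a minimal Python sketch, assuming an `openai`-package AzureOpenAI client and a deployment named `text-embedding-ada-002` (the deployment name and client are assumptions, not given in the question):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def phrase_similarity(client, phrase, file_text, deployment="text-embedding-ada-002"):
    """Embed a phrase and a file's text, then compare them with cosine similarity.

    `client` is assumed to be an openai-package AzureOpenAI client;
    `deployment` is the name you gave the embedding model at deploy time.
    """
    resp = client.embeddings.create(model=deployment, input=[phrase, file_text])
    return cosine_similarity(resp.data[0].embedding, resp.data[1].embedding)
```

Files scoring above a chosen similarity threshold would then be treated as containing the phrase.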

28. AI-102 Topic 7 Question 21

Sequence
186
Discussion ID
150033
Source URL
https://www.examtopics.com/discussions/microsoft/view/150033-exam-ai-102-topic-7-question-21-discussion/
Posted By
Rubby
Posted At
Oct. 22, 2024, 2:01 a.m.

Question

You have an Azure subscription that contains an Azure OpenAI resource named AI1 and a user named User1.

You need to ensure that User1 can perform the following actions in Azure OpenAI Studio:

• Identify resource endpoints.
• View models that are available for deployment.
• Generate text and images by using the deployed models.

The solution must follow the principle of least privilege.

Which role should you assign to User1?

  • A. Cognitive Services OpenAI User
  • B. Cognitive Services Contributor
  • C. Contributor
  • D. Cognitive Services OpenAI Contributor

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 2 comments Click to expand

Comment 1

ID: 1304218 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 00:25 Selected Answer: A Upvotes: 4

A. Cognitive Services OpenAI User

Reasoning: The Cognitive Services OpenAI User role provides read-only access to the Azure OpenAI resource, allowing User1 to:

• Identify resource endpoints
• View available models for deployment
• Generate text and images using deployed models

This role follows the principle of least privilege by granting only the permissions needed to view and interact with deployed models, without allowing management or configuration changes. The other roles, such as Cognitive Services OpenAI Contributor and Cognitive Services Contributor, grant additional permissions that are unnecessary for User1’s requirements.

Comment 2

ID: 1301330 User: Rubby Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 22 Oct 2024 02:01 Selected Answer: - Upvotes: 1

A is correct. Cognitive Services OpenAI User permission: 1.View the resource endpoint under “Keys and Endpoint”. 2.View what models are available for deployment in Azure OpenAI Studio. 3. Use playground experiences with any models that have already been deployed to this Azure OpenAI resource
Ref:https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor

29. AI-102 Topic 8 Question 1

Sequence
187
Discussion ID
150421
Source URL
https://www.examtopics.com/discussions/microsoft/view/150421-exam-ai-102-topic-8-question-1-discussion/
Posted By
a8da4af
Posted At
Oct. 29, 2024, 12:34 a.m.

Question

You have an Azure subscription that contains an Azure OpenAI resource named AI1.

You build a chatbot that uses AI1 to provide generative answers to specific questions.

You need to ensure that the chatbot checks all input and output for objectionable content.

Which type of resource should you create first?

  • A. Microsoft Defender Threat Intelligence (Defender TI)
  • B. Azure AI Content Safety
  • C. Log Analytics
  • D. Azure Machine Learning

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 1 comment Click to expand

Comment 1

ID: 1304221 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 00:34 Selected Answer: B Upvotes: 4

B is correct, chatGPT verified:

The correct answer is:

B. Azure AI Content Safety

Reasoning: To ensure that your chatbot checks all input and output for objectionable content, you should create an Azure AI Content Safety resource. This service is specifically designed to evaluate text and detect potentially harmful or objectionable content, making it ideal for your chatbot's needs. It can help ensure that the responses generated by AI1 are appropriate and meet safety standards.

The other options do not directly address the requirement of filtering objectionable content in AI-generated responses.
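
For illustration, a decision helper in Python: the severity-threshold logic stands on its own, while the SDK wiring in the comments is a hypothetical sketch assuming the `azure-ai-contentsafety` package:

```python
def exceeds_threshold(category_severities: dict, threshold: int = 2) -> bool:
    """Decide whether analyzed content should be blocked.

    `category_severities` maps category name -> severity score, mirroring
    the per-category results of a Content Safety analyze-text call.
    """
    return any(severity >= threshold for severity in category_severities.values())

# Hypothetical wiring with the azure-ai-contentsafety package (names assumed):
#
#   from azure.ai.contentsafety import ContentSafetyClient
#   from azure.ai.contentsafety.models import AnalyzeTextOptions
#   from azure.core.credentials import AzureKeyCredential
#
#   client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
#   result = client.analyze_text(AnalyzeTextOptions(text=user_input))
#   severities = {c.category: c.severity for c in result.categories_analysis}
#   if exceeds_threshold(severities):
#       ...  # block the input or the generated answer
```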

30. AI-102 Topic 7 Question 9

Sequence
205
Discussion ID
135064
Source URL
https://www.examtopics.com/discussions/microsoft/view/135064-exam-ai-102-topic-7-question-9-discussion/
Posted By
audlindr
Posted At
March 2, 2024, 7:05 p.m.

Question

HOTSPOT

You have an Azure subscription that contains an Azure OpenAI resource.

You configure a model that has the following settings:

• Temperature: 1
• Top probabilities: 0.5
• Max response tokens: 100

You ask the model a question and receive the following response.

image

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 16 comments Click to expand

Comment 1

ID: 1234507 User: HaraTadahisa Badges: Highly Voted Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 16:53 Selected Answer: - Upvotes: 15

My answer is that
No
No
No

Comment 2

ID: 1230860 User: takaimomoGcup Badges: Highly Voted Relative Date: 1 year, 9 months ago Absolute Date: Sat 15 Jun 2024 10:58 Selected Answer: - Upvotes: 10

No
No
No

Comment 3

ID: 1275554 User: testmaillo020 Badges: Most Recent Relative Date: 1 year, 6 months ago Absolute Date: Sat 31 Aug 2024 12:58 Selected Answer: - Upvotes: 3

1. No: The total tokens used for the session include both the prompt tokens (37) and the completion tokens (86), totaling 123 tokens. Therefore, the subscription will be charged for 123 tokens, not just 86.
2. Yes: The Max response tokens were set to 100, and the completion used 86 tokens. The text completion was not truncated because the response did not exceed the maximum allowed tokens.
3. No: The prompt_tokens are not included in the Max response tokens value. The Max response tokens only refer to the tokens used in the model's response, not the tokens used in the input prompt.

Comment 3.1

ID: 1286726 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Fri 20 Sep 2024 10:41 Selected Answer: - Upvotes: 4

for Q2, should this not be NO, since you said "The Max response tokens were set to 100, and the completion used 86 tokens. The text completion was not truncated because the response did not exceed the maximum allowed tokens."

Comment 4

ID: 1266137 User: cloudrain Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 15 Aug 2024 04:27 Selected Answer: - Upvotes: 1

answer is correct.
3rd should be no because "Token costs are for both input and output. For example, suppose you have a 1,000 token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output that is received in response for a total of 2,000 tokens." source below
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/manage-costs#understand-the-azure-openai-full-billing-model

Comment 4.1

ID: 1266138 User: cloudrain Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 15 Aug 2024 04:27 Selected Answer: - Upvotes: 2

meant to say 3rd should be Yes

Comment 5

ID: 1233072 User: etellez Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Wed 19 Jun 2024 21:03 Selected Answer: - Upvotes: 2

copilot says

The subscription will be charged 86 tokens for the execution of the session: Yes

The text completion was truncated because the Max response tokens value was exceeded: No

The prompt_tokens value will be included in the calculation of the Max response tokens value: Yes

Comment 6

ID: 1232809 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Wed 19 Jun 2024 11:43 Selected Answer: - Upvotes: 4

N/N/N
A - It takes into account the prompt tokens, hence as per the calculation 123 tokens should be charged
B - Text completion was not truncated, because the response token count is 86 < 100
C - Prompt_tokens is not included in the calculation of the max response tokens value.

Comment 7

ID: 1206772 User: michaelmorar Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sun 05 May 2024 05:49 Selected Answer: - Upvotes: 5

N - the subscription is NOT charged for 86 tokens - the response does not contain 86 tokens. For reference, each token is roughly four characters for typical English text.
N - text completion is clearly under 86 and the sentence is not truncated. the finish_reason here is "stop". If the prompt had been cut off, the finish_reason would have been 'length'
N - max tokens is the maximum number to generate in the COMPLETION.
The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).

Comment 8

ID: 1197088 User: tk1828 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 17 Apr 2024 10:21 Selected Answer: - Upvotes: 4

N/N/N
Subscriptions are charged for both the prompt and completion tokens.
Completion tokens is less than max response tokens.
It refers to max response tokens only, not max tokens.
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/manage-costs#understand-the-azure-openai-full-billing-model

Comment 9

ID: 1188283 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 02 Apr 2024 21:46 Selected Answer: - Upvotes: 1

The subscription will be charged 86 tokens for the execution of the session. Yes, that’s correct. The completion_tokens value represents the number of tokens in the model’s response, and this is what you’re billed for.
The text completion was truncated because the Max response tokens value was exceeded. No, that’s not correct. The response in this case wasn’t truncated. The max_tokens parameter sets a limit on the length of the generated response. If the model’s response had exceeded this limit, it would have been cut off, but in this case, the response is only 86 tokens long, which is less than the max_tokens value of 100.
The prompt_tokens value will be included in the calculation of the max_tokens value. Yes, that’s correct. The max_tokens parameter includes both the prompt tokens and the completion tokens. So if your prompt is very long, it could limit the length of the model’s response.

Comment 9.1

ID: 1284967 User: AzureGeek79 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 01:19 Selected Answer: - Upvotes: 1

that's correct. The answer is N, Y, Y.

Comment 10

ID: 1181178 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 23 Mar 2024 22:57 Selected Answer: - Upvotes: 3

"The session execution consumed 86 tokens" is NO; it should be a total of 123 tokens, which includes the prompt tokens.
"The text completion was truncated due to exceeding the Max response tokens value" is YES.
"The prompt_tokens value is included in the calculation of the Max response tokens value" is YES.

Comment 11

ID: 1173689 User: GHill1982 Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Thu 14 Mar 2024 21:27 Selected Answer: - Upvotes: 3

I think it should be N/N/N.

Comment 11.1

ID: 1195783 User: GHill1982 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 15 Apr 2024 05:40 Selected Answer: - Upvotes: 2

Changing my mind to Y/N/N

The subscription will be charged 86 tokens for the execution of the session.
Yes, the subscription will be charged for the completion_tokens used during the execution, which in this case is 86 tokens.
The text completion was truncated because the Max response tokens value was exceeded.
No, the text completion was not truncated due to exceeding the Max response tokens value. The finish_reason is listed as “stop,” which indicates that the model stopped generating additional content because it reached a natural stopping point in the text, not because it hit the token limit.
The prompt_tokens value will be included in the calculation of the Max response tokens value.
No, the prompt_tokens value is not included in the calculation of the Max response tokens value. The Max response tokens setting only limits the length of the new content generated by the model in response to the prompt.

Comment 11.1.1

ID: 1200340 User: sergbs Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Mon 22 Apr 2024 20:34 Selected Answer: - Upvotes: 3

You are wrong. First No. Azure OpenAI base series and Codex series models are charged per 1,000 tokens. https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/manage-costs#understand-the-azure-openai-full-billing-model
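
The majority answer (N/N/N) follows directly from the numbers quoted in the comments (37 prompt tokens, 86 completion tokens, finish_reason "stop"); a quick Python check:

```python
def billed_tokens(usage: dict) -> int:
    """Azure OpenAI charges for prompt tokens plus completion tokens."""
    return usage["prompt_tokens"] + usage["completion_tokens"]

def was_truncated(finish_reason: str) -> bool:
    """finish_reason 'length' means the max response tokens limit cut the
    reply off; 'stop' means the model finished on its own."""
    return finish_reason == "length"

usage = {"prompt_tokens": 37, "completion_tokens": 86, "total_tokens": 123}
assert billed_tokens(usage) == 123  # charged for 123 tokens, not 86
assert not was_truncated("stop")    # 86 < 100, so the reply was not cut off
```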

31. AI-102 Topic 7 Question 7

Sequence
208
Discussion ID
135487
Source URL
https://www.examtopics.com/discussions/microsoft/view/135487-exam-ai-102-topic-7-question-7-discussion/
Posted By
witkor
Posted At
March 8, 2024, 10:41 a.m.

Question

DRAG DROP

You have an Azure subscription that contains an Azure OpenAI resource named AI1.

You plan to build an app named App1 that will write press releases by using AI1.

You need to deploy an Azure OpenAI model for App1. The solution must minimize development effort.

Which three actions should you perform in sequence in Azure OpenAI Studio? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

image

Suggested Answer

image
Answer Description Click to expand


Comments 10 comments Click to expand

Comment 1

ID: 1231370 User: fba825b Badges: Highly Voted Relative Date: 1 year, 8 months ago Absolute Date: Sun 16 Jun 2024 14:45 Selected Answer: - Upvotes: 18

1. Create deployment using GPT-3.5-Turbo
2. Apply the Marketing Writing Assistant template
3. Deploy solution to a new web app

Comment 1.1

ID: 1234351 User: exnaniantwort Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 12:55 Selected Answer: - Upvotes: 1

very clear

Comment 2

ID: 1168706 User: witkor Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Fri 08 Mar 2024 10:41 Selected Answer: - Upvotes: 5

To deploy an Azure OpenAI model for the app "App1" with minimized development effort, follow these steps in Azure OpenAI Studio:

Create a deployment that uses the GPT-35 Turbo model.

Drag "Create a deployment that uses the GPT-35 Turbo model" to the answer area.

Apply the Default system message template.

Drag "Apply the Default system message template" to the answer area.

Deploy the solution to a new web app.

Drag "Deploy the solution to a new web app" to the answer area.

Comment 2.1

ID: 1173134 User: Mehe323 Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Thu 14 Mar 2024 06:22 Selected Answer: - Upvotes: 12

Considering the fact that the question is about press releases, I would choose the marketing writing assistant template, like in the answer.

Comment 2.1.1

ID: 1217279 User: TJ001 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 09:31 Selected Answer: - Upvotes: 1

https://microsoftlearning.github.io/mslearn-openai/Instructions/Exercises/01-get-started-azure-openai.html

Comment 3

ID: 1284807 User: famco Badges: Most Recent Relative Date: 1 year, 5 months ago Absolute Date: Mon 16 Sep 2024 16:59 Selected Answer: - Upvotes: 1

"Deploy the solution to a new web app" << That does not mean anything in this context. Pure nonsense. But I guess that is what I have to select considering this is a Microsoft exam

Comment 4

ID: 1279803 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 06 Sep 2024 22:51 Selected Answer: - Upvotes: 1

"Deploy solution to a new web app" << What does that mean??
But because nothing else can be selected I have to select this.

Comment 5

ID: 1267918 User: JuneRain Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 18 Aug 2024 04:58 Selected Answer: - Upvotes: 2

This question was in the test I took in August 2024

Comment 6

ID: 1266176 User: anto69 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 15 Aug 2024 05:43 Selected Answer: - Upvotes: 1

1. Create deployment using GPT-3.5-Turbo
2. Apply the Marketing Writing Assistant template
3. Deploy solution to a new web app

Comment 7

ID: 1170496 User: Murtuza Badges: - Relative Date: 2 years ago Absolute Date: Sun 10 Mar 2024 18:42 Selected Answer: - Upvotes: 5

In the Setup panel, under Use a system message template, select the Marketing Writing Assistant template and confirm that you want to update the system message.
https://microsoftlearning.github.io/mslearn-openai/Instructions/Exercises/01-get-started-azure-openai.html

32. AI-102 Topic 7 Question 16

Sequence
209
Discussion ID
143644
Source URL
https://www.examtopics.com/discussions/microsoft/view/143644-exam-ai-102-topic-7-question-16-discussion/
Posted By
Toby86
Posted At
July 9, 2024, 7:34 p.m.

Question

You have an Azure subscription.

You need to build an app that will compare documents for semantic similarity. The solution must meet the following requirements:

• Return numeric vectors that represent the tokens of each document.
• Minimize development effort.

Which Azure OpenAI model should you use?

  • A. GPT-3.5
  • B. GPT-4
  • C. embeddings
  • D. DALL-E

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 5 comments Click to expand

Comment 1

ID: 1284644 User: 9c652a0 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Mon 16 Sep 2024 13:47 Selected Answer: C Upvotes: 1

embeddings

Comment 2

ID: 1283524 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sat 14 Sep 2024 09:10 Selected Answer: - Upvotes: 1

c for sure

Comment 3

ID: 1275560 User: testmaillo020 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sat 31 Aug 2024 13:13 Selected Answer: - Upvotes: 2

embeddings correct

Comment 4

ID: 1271965 User: Moneybing Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 25 Aug 2024 03:24 Selected Answer: C Upvotes: 1

Copilot says Azure OpenAI Service embeddings.

Comment 5

ID: 1248262 User: fqc Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 15 Jul 2024 12:24 Selected Answer: - Upvotes: 3

correct
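
Once each document is represented as a numeric vector (e.g. by an embeddings deployment), comparing documents is plain vector math. A minimal, Azure-free sketch of ranking documents against a query vector:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def rank_documents(query_vec, doc_vecs):
    """Return (index, score) pairs sorted from most to least similar."""
    scores = [(i, cosine(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```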

33. AI-102 Topic 1 Question 58

Sequence
282
Discussion ID
134922
Source URL
https://www.examtopics.com/discussions/microsoft/view/134922-exam-ai-102-topic-1-question-58-discussion/
Posted By
audlindr
Posted At
Feb. 29, 2024, 5:25 p.m.

Question

HOTSPOT

You plan to deploy an Azure OpenAI resource by using an Azure Resource Manager (ARM) template.

You need to ensure that the resource can respond to 600 requests per minute.

How should you complete the template? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 10 comments Click to expand

Comment 1

ID: 1170033 User: chandiochan Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Sun 10 Mar 2024 04:35 Selected Answer: - Upvotes: 38

Answer is correct:
Azure OpenAI allows you to manage how frequently your application can make inferencing requests. Your rate limits are based on Tokens-per-Minute (TPM). For example, if you have a capacity of 1, this equals 1,000 TPM, and the rate limit of requests you can make per minute (RPM) is calculated using a ratio. For every 1,000 TPM, you can make 6 RPM.

So, if you need to process 600 requests every minute, you'll require a TPM that supports that many RPM. Using the ratio, for 600 RPM, you need 100,000 TPM (because 600 divided by 6 equals 100, and 100 multiplied by 1,000 equals 100,000). In this scenario, you would set the capacity to 100, since each capacity unit equals 1,000 TPM.

Comment 2

ID: 1162787 User: audlindr Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Thu 29 Feb 2024 17:25 Selected Answer: - Upvotes: 13

Answer is correct
From here https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/quota?tabs=rest
When a deployment is created, the assigned TPM will directly map to the tokens-per-minute rate limit enforced on its inferencing requests. A Requests-Per-Minute (RPM) rate limit will also be enforced whose value is set proportionally to the TPM assignment using the following ratio:

6 RPM per 1000 TPM.

From here:
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/quota?tabs=rest
capacity integer This represents the amount of quota you are assigning to this deployment. A value of 1 equals 1,000 Tokens per Minute (TPM). A value of 10 equals 10k Tokens per Minute (TPM).

So the math is: 600 RPM needs 100,000 TPM, which translates to capacity 100

Comment 2.1

ID: 1164262 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 18:44 Selected Answer: - Upvotes: 5

This was in 2-Mar-24 exam

Comment 3

ID: 1235233 User: HaraTadahisa Badges: Most Recent Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 09:05 Selected Answer: - Upvotes: 2

1. capacity
2. 100

Comment 4

ID: 1220874 User: PeteColag Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 13:41 Selected Answer: - Upvotes: 1

Parameter Type Description
sku Sku The resource model definition representing SKU.
capacity integer This represents the amount of quota you are assigning to this deployment. A value of 1 equals 1,000 Tokens per Minute (TPM). A value of 10 equals 10k Tokens per Minute (TPM).

Comment 5

ID: 1217597 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:37 Selected Answer: - Upvotes: 1

capacity and 100.

Comment 6

ID: 1217596 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:36 Selected Answer: - Upvotes: 1

capacity and 100.
From Takedajuku perspective, if you study for 4 days and spend 2 days reviewing, you will have a better chance of passing the exam.

Comment 7

ID: 1217343 User: funny_penguin Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 11:24 Selected Answer: - Upvotes: 2

on exam today 24/05/2024, I selected capacity and 600

Comment 8

ID: 1187855 User: NullVoider_0 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 02 Apr 2024 07:09 Selected Answer: - Upvotes: 3

The accurate capacity value to meet the requirement of processing 600 requests per minute is indeed 100. I apologize for any confusion caused earlier

Comment 9

ID: 1187854 User: NullVoider_0 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 02 Apr 2024 07:01 Selected Answer: - Upvotes: 2

In the provided options, the correct choice is:

"capacity" : 600

The capacity parameter in the SKU configuration of an Azure OpenAI resource specifies the maximum number of requests per minute (RPM) that the resource can handle. By setting "capacity": 600, you are configuring the resource to handle up to 600 requests per minute.
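
The ratio cited in the top comments (6 RPM per 1,000 TPM, one capacity unit = 1,000 TPM) makes the arithmetic mechanical:

```python
import math

RPM_PER_1000_TPM = 6          # rate-limit ratio from the quota docs cited above
TPM_PER_CAPACITY_UNIT = 1000  # sku capacity 1 = 1,000 tokens per minute

def capacity_for_rpm(target_rpm: int) -> int:
    """Smallest sku capacity value that supports the target requests per minute."""
    tpm_needed = target_rpm / RPM_PER_1000_TPM * 1000  # 600 RPM -> 100,000 TPM
    return math.ceil(tpm_needed / TPM_PER_CAPACITY_UNIT)

assert capacity_for_rpm(600) == 100  # the value to put in the ARM template
```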

34. AI-102 Topic 7 Question 11

Sequence
299
Discussion ID
136128
Source URL
https://www.examtopics.com/discussions/microsoft/view/136128-exam-ai-102-topic-7-question-11-discussion/
Posted By
GHill1982
Posted At
March 15, 2024, 6:07 a.m.

Question

HOTSPOT

You have an Azure subscription that contains an Azure OpenAI resource named AI1.

You plan to develop a console app that will answer user questions.

You need to call AI1 and output the results to the console.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 4 comments Click to expand

Comment 1

ID: 1232811 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Wed 19 Jun 2024 11:48 Selected Answer: - Upvotes: 4

1. GetCompletions - this is the API for generating completions.
2. (response.Value.Choices[0].Text) - this will output the text from the top-rated choice in the response object.

Comment 2

ID: 1220972 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 15:18 Selected Answer: - Upvotes: 3

1. GetCompletions
2. (response.Value.Choices[0].Text);

Comment 3

ID: 1181544 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 24 Mar 2024 13:59 Selected Answer: - Upvotes: 4

Agree answer is correct

Comment 4

ID: 1174053 User: GHill1982 Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Fri 15 Mar 2024 06:07 Selected Answer: - Upvotes: 2

Answer is correct.
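
The C# fragment in the comments, `response.Value.Choices[0].Text`, reads the top-ranked choice out of the completions response. The same idea against the REST-style payload, shown in Python for illustration (the sample payload is invented):

```python
def top_choice_text(response: dict) -> str:
    """Pull the first (top-ranked) completion text out of a completions
    response shaped like the Azure OpenAI REST payload."""
    return response["choices"][0]["text"]

# Invented sample in the REST shape, fields abbreviated:
sample = {"choices": [{"text": "Paris is the capital of France.",
                       "finish_reason": "stop"}]}
print(top_choice_text(sample))
```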

35. AI-102 Topic 1 Question 62

Sequence
311
Discussion ID
135059
Source URL
https://www.examtopics.com/discussions/microsoft/view/135059-exam-ai-102-topic-1-question-62-discussion/
Posted By
audlindr
Posted At
March 2, 2024, 6:45 p.m.

Question

You have an Azure OpenAI model named AI1.

You are building a web app named App1 by using the Azure OpenAI SDK.

You need to configure App1 to connect to AI1.

What information must you provide?

  • A. the endpoint, key, and model name
  • B. the deployment name, key, and model name
  • C. the deployment name, endpoint, and key
  • D. the endpoint, key, and model type

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 10 comments Click to expand

Comment 1

ID: 1225119 User: p2006 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 05:10 Selected Answer: C Upvotes: 2

"AI1" is deployment name.

https://learn.microsoft.com/en-us/azure/ai-services/openai/quickstart?tabs=command-line%2Cpython-new&pivots=rest-api#retrieve-key-and-endpoint

DEPLOYMENT-NAME This value will correspond to the custom name you chose for your deployment when you deployed a model.

Comment 2

ID: 1205019 User: anntv252 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 01 May 2024 12:53 Selected Answer: - Upvotes: 2

C. the deployment name, endpoint, and key .
Endpoint should include model name

Comment 3

ID: 1194119 User: michaelmorar Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 12 Apr 2024 05:41 Selected Answer: C Upvotes: 2

Correct, remember that Model Name and Type aren't relevant, we only need the deployment details and key.

Comment 4

ID: 1187868 User: NullVoider_0 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 02 Apr 2024 07:48 Selected Answer: C Upvotes: 2

Correct answer.

Comment 5

ID: 1183719 User: Mehe323 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Wed 27 Mar 2024 00:45 Selected Answer: C Upvotes: 1

Correct.

When you access the model via the API you will need to refer to the deployment name rather than the underlying model name in API calls.
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal

Comment 5.1

ID: 1190207 User: Training Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 06 Apr 2024 05:16 Selected Answer: - Upvotes: 1

https://learn.microsoft.com/en-us/azure/ai-services/openai/quickstart?tabs=command-line%2Cpython-new&pivots=rest-api

Comment 6

ID: 1165948 User: GHill1982 Badges: - Relative Date: 2 years ago Absolute Date: Mon 04 Mar 2024 20:54 Selected Answer: C Upvotes: 4

To connect to an Azure OpenAI model using the Azure OpenAI SDK, you need to provide:
The deployment name of the model that you want to use. This is the name that you assigned to the model when you deployed it.
The endpoint of your Azure OpenAI resource. This is the URL that you can find in the Overview section of your resource in the Azure portal or by using the Azure CLI.
The key of your Azure OpenAI resource. This is the API key that you can find in the Keys and Endpoint section of your resource in the Azure portal or by using the Azure CLI.

Comment 6.1

ID: 1181165 User: AlviraTony Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 23 Mar 2024 22:35 Selected Answer: - Upvotes: 1

It is the name of the deployed model, that means it is "model name", not "deployment name"

Comment 6.1.1

ID: 1181166 User: AlviraTony Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 23 Mar 2024 22:36 Selected Answer: - Upvotes: 1

Ignore my comment

Comment 7

ID: 1164377 User: Razvan_C Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 22:43 Selected Answer: C Upvotes: 1

Answer seems to be correct
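
To summarize the accepted answer: the SDK needs the endpoint, the key, and the deployment name, not the base model name. A Python sketch with placeholder values; the commented wiring assumes the `openai` package's AzureOpenAI client:

```python
# Placeholder values only; substitute your own resource's settings.
connection = {
    "endpoint": "https://<your-resource>.openai.azure.com/",  # Keys and Endpoint blade
    "api_key": "<your-key>",                                  # Keys and Endpoint blade
    "deployment": "<your-deployment-name>",                   # chosen when deploying the model
}

def has_required_settings(conn: dict) -> bool:
    """The model name is not required; the deployment name stands in for it."""
    return all(conn.get(k) for k in ("endpoint", "api_key", "deployment"))

# Hypothetical wiring with the openai package (names assumed):
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint=connection["endpoint"],
#                        api_key=connection["api_key"],
#                        api_version="2024-02-01")
#   client.chat.completions.create(model=connection["deployment"], messages=[...])
```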