AI-102 Vision and Document AI

Group image, video, OCR, Face, and Document Intelligence questions together so closely related visual services stay in one memory bucket.

Exams
AI-102
Questions
97
Comments
1082

1. AI-102 Topic 2 Question 9

Sequence
3
Discussion ID
74741
Source URL
https://www.examtopics.com/discussions/microsoft/view/74741-exam-ai-102-topic-2-question-9-discussion/
Posted By
SamedKia
Posted At
April 28, 2022, 9:53 a.m.

Question

HOTSPOT -
You are building a model that will be used in an iOS app.
You have images of cats and dogs. Each image contains either a cat or a dog.
You need to use the Custom Vision service to detect whether each image is of a cat or a dog.
How should you configure the project in the Custom Vision portal? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description

Box 1: Classification -
Incorrect Answers:
An object detection project is for detecting which objects, if any, from a set of candidates are present in an image.

Box 2: Multiclass -
A multiclass classification project is for classifying images into a set of tags, or target labels. An image can be assigned to one tag only.
Incorrect Answers:
A multilabel classification project is similar, but each image can have multiple tags assigned to it.

Box 3: General -
General: Optimized for a broad range of image classification tasks. If none of the other specific domains are appropriate, or if you're unsure of which domain to choose, select one of the General domains.
Reference:
https://cran.r-project.org/web/packages/AzureVision/vignettes/customvision.html
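The three-box reasoning can be condensed into a small decision helper. This is a study sketch with illustrative names, not Custom Vision SDK identifiers; note that the comments in this thread argue an iOS target additionally requires a compact (exportable) domain:

```python
def choose_custom_vision_settings(one_label_per_image: bool, export_to_mobile: bool) -> dict:
    """Map the question's requirements to Custom Vision project settings.

    Hypothetical helper for study purposes; keys/values mirror the options
    shown in the Custom Vision portal, not SDK enum names.
    """
    return {
        # Whole-image cat-vs-dog labeling is classification, not object detection.
        "project_type": "Classification",
        # Each image contains either a cat or a dog, never both: one tag per image.
        "classification_type": "Multiclass" if one_label_per_image else "Multilabel",
        # Only compact domains can be exported (e.g. to Core ML for iOS).
        "domain": "General (compact)" if export_to_mobile else "General",
    }

settings = choose_custom_vision_settings(one_label_per_image=True, export_to_mobile=True)
print(settings)
```

Running it with the question's constraints (single label per image, iOS deployment) yields Classification / Multiclass / General (compact), matching the community consensus below.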

Comments (15)

Comment 1

ID: 597547 User: dinhhungitsoft Badges: Highly Voted Relative Date: 3 years, 10 months ago Absolute Date: Fri 06 May 2022 06:23 Selected Answer: - Upvotes: 50

The third choice should be General (compact), so that the model can be exported for use on an iOS device.

Comment 1.1

ID: 602601 User: g2000 Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Mon 16 May 2022 15:35 Selected Answer: - Upvotes: 5

It seems General (compact) is for edge devices, not iOS specifically.
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/select-domain#image-classification

Comment 1.1.1

ID: 1719323 User: keambi0111 Badges: - Relative Date: 6 days, 19 hours ago Absolute Date: Thu 05 Mar 2026 20:06 Selected Answer: - Upvotes: 1

Custom Vision Service only exports projects with compact domains. The models generated by compact domains are optimized for the constraints of real-time classification on mobile devices. Classifiers built with a compact domain might be slightly less accurate than a standard domain with the same amount of training data.

https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/export-your-model

Comment 1.1.2

ID: 618555 User: sdokmak Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Sun 19 Jun 2022 08:34 Selected Answer: - Upvotes: 15

So General (compact) is correct, since an iOS device is an edge device.

Comment 2

ID: 635094 User: Eltooth Badges: Highly Voted Relative Date: 3 years, 7 months ago Absolute Date: Fri 22 Jul 2022 10:46 Selected Answer: - Upvotes: 17

Classification
Multiclass
General (compact)

Comment 3

ID: 1712099 User: phvogel Badges: Most Recent Relative Date: 1 month ago Absolute Date: Thu 05 Feb 2026 21:31 Selected Answer: - Upvotes: 1

General (Compact) is the right choice for iOS: https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/export-your-model

Comment 4

ID: 1308720 User: jolimon Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Fri 08 Nov 2024 12:16 Selected Answer: - Upvotes: 2

I chose:
Classification
Multiclass
general (compact)

Comment 5

ID: 1248789 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 16 Jul 2024 11:08 Selected Answer: - Upvotes: 1

Classification
Multiclass
General (compact)

Comment 6

ID: 1235745 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 23 Jun 2024 10:42 Selected Answer: - Upvotes: 1

classification
multiclass
General (compact): Since the model will be used in an iOS app, a compact model is preferred for performance and size reasons. The "General (compact)" domain is suitable for a wide range of image classification tasks and is optimized for mobile and edge devices.

Comment 7

ID: 1220343 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:26 Selected Answer: - Upvotes: 2

1. Classification
2. Multiclass
3. General (compact)

Comment 8

ID: 1220247 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:48 Selected Answer: - Upvotes: 1

1. Classification
2. Multiclass
3. General (compact)

Comment 9

ID: 1218337 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 15:13 Selected Answer: - Upvotes: 2

1. Classification
2. Multiclass
3. General (compact)

Comment 10

ID: 1182406 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 11:47 Selected Answer: - Upvotes: 1

Final Answer:
Classification
Multiclass
General (compact)

Comment 11

ID: 631608 User: AiEngineerS Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Fri 15 Jul 2022 06:14 Selected Answer: - Upvotes: 3

I also think that General(compact) https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-your-model
1. It can be running offline
2. Real time locally

Comment 12

ID: 593654 User: SamedKia Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Thu 28 Apr 2022 09:53 Selected Answer: - Upvotes: 1

Correct

2. AI-102 Topic 4 Question 27

Sequence
4
Discussion ID
150443
Source URL
https://www.examtopics.com/discussions/microsoft/view/150443-exam-ai-102-topic-4-question-27-discussion/
Posted By
Christian_garcia_martin
Posted At
Oct. 29, 2024, 6:02 a.m.

Question

HOTSPOT
-

You have an Azure subscription.

You plan to build a solution that will analyze scanned documents and export relevant fields to a database.

You need to recommend an Azure AI Document Intelligence model for the following types of documents:

• Expenditure request authorization forms
• Structured and unstructured survey forms
• Structured employment application forms

The solution must minimize development effort and costs.

Which type of model should you recommend for each document type? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Comments (5)

Comment 1

ID: 1304293 User: Christian_garcia_martin Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 06:02 Selected Answer: - Upvotes: 8

Microsoft copilot:

For expenditure request authorization forms, you should choose the Prebuilt layout model. This model is designed to handle structured or semi-structured documents, making it suitable for forms with fields and values.

For structured employment application forms, you should choose the Custom template model. This model is well-suited for documents with a consistent and predictable layout, making it ideal for structured forms like employment applications.

For structured and unstructured survey forms, you should choose the Custom neural model. This model is designed to handle a variety of document types, including those with mixed structures, making it ideal for extracting data from both structured and unstructured survey forms.

Comment 1.1

ID: 1331053 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Tue 24 Dec 2024 09:05 Selected Answer: - Upvotes: 4

Based on the document types and the requirement to minimize development effort and costs, here are the recommendations:

Expenditure Request Authorization Forms:
Recommended Model: Prebuilt Invoice
Reason: This model is designed to handle structured financial documents and can be easily adapted for expenditure request forms, requiring minimal customization.

Structured and Unstructured Survey Forms:
Recommended Model: Prebuilt Layout
Reason: This model is ideal for documents with both structured and unstructured content, making it suitable for various survey forms without extensive training.

Structured Employment Application Forms:

Recommended Model: Custom Template
Reason: This model allows you to train the system with a few examples of your specific employment application form, making it efficient and cost-effective for documents with a consistent structure.

These choices should help streamline your document processing while keeping development effort and costs to a minimum.

Comment 1.1.1

ID: 1576879 User: osx Badges: - Relative Date: 9 months ago Absolute Date: Thu 12 Jun 2025 15:58 Selected Answer: - Upvotes: 1

It is not an expenditure document; it is the authorization form for the expenditure.

Comment 1.1.2

ID: 1341322 User: BenALGuhl Badges: - Relative Date: 1 year, 1 month ago Absolute Date: Thu 16 Jan 2025 03:23 Selected Answer: - Upvotes: 4

Copilot has no clue in this instance, I think. After being skeptical about your choices, I went into Azure and tried it myself. Now I believe your answers are correct. For expenditure, many fields in the prebuilt invoice model are already included, requiring only small tweaks.
For surveys, one example in the prebuilt layout model is actually a survey-style form with checkboxes and handwritten content, which it handles perfectly.
And I also agree with the custom template for employment application forms, although arguably one could consider the prebuilt contract model, which would have quite a few of the (assumed) required fields mapped already.

Comment 2

ID: 1718201 User: 5b6a2aa Badges: Most Recent Relative Date: 1 week, 3 days ago Absolute Date: Sun 01 Mar 2026 16:56 Selected Answer: - Upvotes: 1

Remember, the data needs to end up in a DB: structured, consistent.
None of the prebuilt models allows for this request.
A custom model will allow this level of tuning.
Custom neural for the structured/unstructured surveys, and custom template for the others.
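Whichever model the exam intends, it is addressed by its model ID in the Document Intelligence REST API (prebuilt models use IDs like `prebuilt-invoice` or `prebuilt-layout`; custom template/neural models use the ID you chose at training time). A minimal sketch of how the analyze request URL is formed; the path follows the v3 REST API shape, and the default api-version here is an assumption to check against current docs:

```python
def analyze_request_url(endpoint: str, model_id: str,
                        api_version: str = "2023-07-31") -> str:
    """Build the analyze-document URL for a given Document Intelligence model."""
    # The document itself is then POSTed to this URL (base64 or a source URL
    # in the JSON body), with the resource key in the request headers.
    return (f"{endpoint.rstrip('/')}/formrecognizer/documentModels/"
            f"{model_id}:analyze?api-version={api_version}")

url = analyze_request_url("https://myresource.cognitiveservices.azure.com", "prebuilt-layout")
print(url)
```

The endpoint and model IDs above are placeholders; swapping the model ID is the only change needed to route each document type to its recommended model.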

3. AI-102 Topic 2 Question 10

Sequence
18
Discussion ID
75691
Source URL
https://www.examtopics.com/discussions/microsoft/view/75691-exam-ai-102-topic-2-question-10-discussion/
Posted By
g2000
Posted At
May 16, 2022, 3:39 p.m.

Question

You have an Azure Video Analyzer for Media (previously Video Indexer) service that is used to provide a search interface over company videos on your company's website.
You need to be able to search for videos based on who is present in the video.
What should you do?

  • A. Create a person model and associate the model to the videos.
  • B. Create person objects and provide face images for each object.
  • C. Invite the entire staff of the company to Video Indexer.
  • D. Edit the faces in the videos.
  • E. Upload names to a language model.

Suggested Answer

A

Comments (13)

Comment 1

ID: 1712103 User: phvogel Badges: - Relative Date: 1 month ago Absolute Date: Thu 05 Feb 2026 21:40 Selected Answer: B Upvotes: 1

A and B are both necessary steps...but we're only allowed to pick one. Before you can create person objects (with up to ~250 images for each person object) you have to create the person model. However, the person model is created for you automatically -- it's not a separate step. Follow the guide here: https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/quickstarts-sdk/identity-client-library?tabs=linux%2Cvisual-studio&pivots=foundry-portal

Comment 2

ID: 1698994 User: rbh1090 Badges: - Relative Date: 3 months ago Absolute Date: Fri 12 Dec 2025 03:09 Selected Answer: B Upvotes: 1

Option A alone is incomplete. Creating a person model is a step, but you must also create person objects and upload face images so Video Indexer can match faces in videos to named people; that’s why option B is the correct choice.

Comment 3

ID: 1603607 User: supro Badges: - Relative Date: 6 months, 2 weeks ago Absolute Date: Thu 28 Aug 2025 17:22 Selected Answer: A Upvotes: 1

In Video Indexer, you create a Person Model.

You upload a few sample images of your CEO, HR manager, Sales lead, etc.

Video Indexer then matches those faces across all your videos.

Now, if you search "CEO," it instantly finds all 20 videos where the CEO appears.

Comment 4

ID: 1578310 User: StelSen Badges: - Relative Date: 8 months, 4 weeks ago Absolute Date: Tue 17 Jun 2025 14:37 Selected Answer: B Upvotes: 3

B. Create person objects and provide face images for each object
Why B is the best choice:
Person objects and their face images are what enable face recognition and video search in practice.

Without creating person objects and supplying reference face images, the person model (Option A) remains empty and cannot recognize anyone.

Azure Video Indexer automatically creates a default person model if you don’t create a custom one—so the system can still function with just person objects and faces

Why not A?
Structural container (optional if you use the default).

So, while Option A sets the structure, Option B does the real work of enabling searchable face-based indexing in videos. Choose B as the best single answer

Comment 5

ID: 1235177 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:17 Selected Answer: A Upvotes: 1

I say this answer is A.

Comment 6

ID: 1218335 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 15:10 Selected Answer: A Upvotes: 2

A is right.

Comment 7

ID: 1049172 User: propanther Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Sat 21 Oct 2023 00:56 Selected Answer: A Upvotes: 3

You can use a Person model to index your new video by assigning the Person model during the upload of the video.

https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-person-model-with-website

Comment 8

ID: 1028486 User: sl_mslconsulting Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Mon 09 Oct 2023 06:11 Selected Answer: B Upvotes: 1

Customers can build custom Person models and enable Azure AI Video Indexer to recognize faces that aren't recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face. You can use the default model if you like but the point is to add persons and their associated image files to the model : https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-person-model-with-website.

Comment 8.1

ID: 1316680 User: nastolgia Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sat 23 Nov 2024 14:55 Selected Answer: - Upvotes: 1

You're not right. First you need to create a person model, and only after that are you able to create person objects.

Comment 8.2

ID: 1028489 User: sl_mslconsulting Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Mon 09 Oct 2023 06:18 Selected Answer: - Upvotes: 1

Azure AI Video Indexer can detect occurrences of this person in the future videos that you index and the current videos that you had already indexed, using the Person model to which you added this new face - you need to tell it what it should be comparing to when looking for a given face.

Comment 8.3

ID: 1028496 User: sl_mslconsulting Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Mon 09 Oct 2023 06:36 Selected Answer: - Upvotes: 1

Also If you don't need the multiple Person model support, don't assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer will use the default Person model in your account. one more reason not to choose A.

Comment 9

ID: 652384 User: Nebary Badges: - Relative Date: 3 years, 6 months ago Absolute Date: Sat 27 Aug 2022 01:58 Selected Answer: A Upvotes: 4

Should be A

Comment 10

ID: 602604 User: g2000 Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Mon 16 May 2022 15:39 Selected Answer: - Upvotes: 1

seems right
https://docs.microsoft.com/en-us/azure/azure-video-indexer/customize-person-model-with-website
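The two-step flow the thread debates (create a person model, then add person objects with face images) maps onto the Video Indexer customization REST API roughly as below. A sketch only: the path shapes are taken from the customization docs linked above, while the exact query parameters and the account values are assumptions:

```python
BASE = "https://api.videoindexer.ai"

def person_model_url(location: str, account_id: str, model_name: str) -> str:
    """URL for creating a custom person model (step A in the thread)."""
    # POST here (with an access token) to create the named person model.
    return f"{BASE}/{location}/Accounts/{account_id}/Customization/PersonModels?name={model_name}"

def person_url(location: str, account_id: str, model_id: str, person_name: str) -> str:
    """URL for creating a person object inside a model (step B in the thread)."""
    # POST here to create the person; face images are then uploaded for that
    # person so Video Indexer can match faces across indexed videos.
    return (f"{BASE}/{location}/Accounts/{account_id}/Customization/"
            f"PersonModels/{model_id}/Persons?name={person_name}")

print(person_model_url("trial", "acct-123", "Staff"))
```

`trial`, `acct-123`, and the model/person names are placeholders. The sketch also shows why the thread splits between A and B: both calls exist, and B is useless until a model (custom or the account default) exists to hold the person objects.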

4. AI-102 Topic 4 Question 16

Sequence
22
Discussion ID
136922
Source URL
https://www.examtopics.com/discussions/microsoft/view/136922-exam-ai-102-topic-4-question-16-discussion/
Posted By
Murtuza
Posted At
March 22, 2024, 1:49 p.m.

Question

HOTSPOT
-

You plan to provision Azure Cognitive Services resources by using the following method.

You need to create a Standard tier resource that will convert scanned receipts into text.

image

How should you call the method? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Comments (10)

Comment 1

ID: 1218683 User: reiwanotora Badges: Highly Voted Relative Date: 1 year, 9 months ago Absolute Date: Sun 26 May 2024 04:26 Selected Answer: - Upvotes: 15

The current name for Form Recognizer (as of May 2024) is Document Intelligence.

1. Document Intelligence(Form Recognizer)
2. "S0", "eastus"

https://azure.microsoft.com/en-us/pricing/details/ai-document-intelligence/
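In SDK terms, those two selections correspond to an account payload like the one below. A sketch using plain dicts: the `azure-mgmt-cognitiveservices` management client takes equivalent model objects (`Account`, `Sku`), but verify the exact names against that SDK's docs before relying on them:

```python
def document_intelligence_account(location: str = "eastus") -> dict:
    """Parameters for a Standard-tier Document Intelligence resource."""
    return {
        "kind": "FormRecognizer",  # resource kind for Document Intelligence
        "sku": {"name": "S0"},     # Standard tier; F0 is the free tier
        "location": location,      # Azure region name, e.g. "eastus" (not "useast")
    }

account = document_intelligence_account()
print(account)
```

This payload would be passed, along with a resource group and account name, to the management client's account-creation call; the dict shape here is illustrative.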

Comment 2

ID: 1284180 User: famco Badges: Most Recent Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:08 Selected Answer: - Upvotes: 3

Is it asking me to know that it is eastus and not useast? In this era, is this important knowledge for Microsoft?
What does knowing these things prove? Microsoft should do an internal audit and shut down in shame.

Comment 2.1

ID: 1707823 User: Aunehwet79 Badges: - Relative Date: 1 month, 3 weeks ago Absolute Date: Sun 18 Jan 2026 18:55 Selected Answer: - Upvotes: 1

There is no such Azure region as useast. It has always been eastus

Comment 3

ID: 1273121 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 27 Aug 2024 05:58 Selected Answer: - Upvotes: 2

To be honest, if you're a developer you'll notice one parameter is missing (name), so the code is not valid. But for the sake of the answer, we go with tier before location.

Comment 4

ID: 1264989 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 07:04 Selected Answer: - Upvotes: 2

1. Document Intelligence (Form Recognizer)
2. "S0", "eastus"

Comment 5

ID: 1197147 User: sergbs Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 17 Apr 2024 12:09 Selected Answer: - Upvotes: 3

1. FormRecognizer -> Document Intelligence service.
2. S0, "eastus"

Comment 6

ID: 1186859 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 31 Mar 2024 17:10 Selected Answer: - Upvotes: 1

Form Recognizer comes in two tiers: F0 and S0. Apologies for listing an S1 tier; it doesn't exist for the Document Intelligence service. Sorry for the confusion.

Comment 7

ID: 1180206 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 18:04 Selected Answer: - Upvotes: 1

Tier: “S1”
Location: “eastus”

Comment 8

ID: 1180205 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 18:02 Selected Answer: - Upvotes: 1

To create a Standard tier resource that converts scanned receipts into text using Form Recognizer, you should call the provision_resource method with the following parameters:

Resource Name: “res1”
Kind: FormRecognizer

Comment 9

ID: 1180055 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 13:49 Selected Answer: - Upvotes: 2

Document Intelligence (formerly known as Form Recognizer) is an Azure AI service that specializes in extracting structured data from unstructured documents, including receipts.

5. AI-102 Topic 1 Question 37

Sequence
23
Discussion ID
82105
Source URL
https://www.examtopics.com/discussions/microsoft/view/82105-exam-ai-102-topic-1-question-37-discussion/
Posted By
momentumhd
Posted At
Sept. 14, 2022, 10:06 a.m.

Question

You have a factory that produces food products.
You need to build a monitoring solution for staff compliance with personal protective equipment (PPE) requirements. The solution must meet the following requirements:
* Identify staff who have removed masks or safety glasses.
* Perform a compliance check every 15 minutes.
* Minimize development effort.
* Minimize costs.
Which service should you use?

  • A. Face
  • B. Computer Vision
  • C. Azure Video Analyzer for Media (formerly Video Indexer)

Suggested Answer

B

Comments (18)

Comment 1

ID: 673730 User: Davard Badges: Highly Voted Relative Date: 3 years, 5 months ago Absolute Date: Tue 20 Sep 2022 03:26 Selected Answer: - Upvotes: 20

A. Face. The solution link explains:

Embed facial recognition into your apps for a seamless and highly secured user experience. No machine-learning expertise is required. Features include face detection that perceives facial features and attributes—such as a face mask, glasses, or face location—in an image, and identification of a person by a match to your private repository or via photo ID.
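For reference, mask information comes back from the Face detect operation as a face attribute, and the mask attribute requires the detection_03 model. A sketch building the request query string; the parameter names follow the Face REST API, but confirm them against current docs, and the endpoint is a placeholder:

```python
from urllib.parse import urlencode

def face_detect_url(endpoint: str) -> str:
    """Build a Face API detect URL that asks for the mask attribute."""
    query = urlencode({
        "returnFaceAttributes": "mask",    # reports nose-and-mouth coverage
        "detectionModel": "detection_03",  # required for the mask attribute
        "returnFaceId": "false",
    })
    return f"{endpoint.rstrip('/')}/face/v1.0/detect?{query}"

print(face_detect_url("https://myface.cognitiveservices.azure.com"))
```

An image snapshot (taken every 15 minutes, per the question) would be POSTed to this URL; the response lists detected faces with their mask status.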

Comment 1.1

ID: 1706417 User: Mo2010 Badges: - Relative Date: 1 month, 3 weeks ago Absolute Date: Tue 13 Jan 2026 15:51 Selected Answer: - Upvotes: 1

If the monitoring solution uses images captured every 15 minutes,
A. Face is the best choice.
If the solution requires real-time video analysis, then C. Azure Video Analyzer is better.

Comment 2

ID: 1705418 User: Mo2010 Badges: Most Recent Relative Date: 2 months ago Absolute Date: Fri 09 Jan 2026 19:25 Selected Answer: B Upvotes: 1

Why Computer Vision is the best choice
Azure Computer Vision provides prebuilt image analysis capabilities, including:

✅ Detecting people and objects
✅ Identifying whether a face is wearing a mask
✅ Image-based analysis (snapshots every 15 minutes fit perfectly)
✅ Simple REST API → minimal development effort
✅ Lower cost compared to video analytics services

You can periodically capture still images from cameras and analyze them for PPE compliance, which is much cheaper and simpler than processing continuous video.

Why not face ?
Focuses on face detection and recognition
Limited PPE detection (primarily face attributes)
Not designed for general safety gear detection
More restrictive and higher compliance/privacy concerns

Comment 3

ID: 1700485 User: SushilJinder Badges: - Relative Date: 2 months, 3 weeks ago Absolute Date: Fri 19 Dec 2025 13:48 Selected Answer: B Upvotes: 2

I asked ChatGPT and it suggests Computer Vision too.

Comment 4

ID: 1624955 User: Leunis Badges: - Relative Date: 4 months ago Absolute Date: Tue 11 Nov 2025 05:34 Selected Answer: A Upvotes: 1

If it weren't for the "Identify staff..." requirement, it would be B. Because it needs to identify staff who removed PPE, it's A.

Comment 5

ID: 1608391 User: microbricks Badges: - Relative Date: 6 months ago Absolute Date: Fri 12 Sep 2025 04:58 Selected Answer: B Upvotes: 1

Focuses on facial recognition and emotion detection—not designed for detecting PPE like masks or glasses.

Comment 6

ID: 1578822 User: sema2232 Badges: - Relative Date: 8 months, 3 weeks ago Absolute Date: Thu 19 Jun 2025 08:27 Selected Answer: B Upvotes: 2

B is the right one

Comment 7

ID: 1551289 User: gil906 Badges: - Relative Date: 11 months, 1 week ago Absolute Date: Sun 06 Apr 2025 00:06 Selected Answer: B Upvotes: 1

Computer Vision is the best choice for your needs:
Why: It offers a flexible, cost-effective solution for detecting PPE compliance (masks and safety glasses) using custom-trained models. Its API-driven approach minimizes development effort, and it can be integrated into a system that processes video or image snapshots every 15 minutes. Costs are manageable since you’re only analyzing periodic snapshots rather than continuous video streams.

Comment 8

ID: 1395728 User: WorldTraveller Badges: - Relative Date: 12 months ago Absolute Date: Fri 14 Mar 2025 23:37 Selected Answer: B Upvotes: 2

Yes, Azure Computer Vision can be used to monitor staff compliance with personal protective equipment (PPE) requirements by detecting whether employees are wearing the necessary safety gear, such as helmets and masks, in real-time. This technology helps ensure workplace safety and compliance with safety standards.

Comment 9

ID: 1387374 User: tejas29 Badges: - Relative Date: 1 year ago Absolute Date: Tue 11 Mar 2025 12:58 Selected Answer: C Upvotes: 1

For monitoring staff compliance with PPE requirements in your factory, Azure Video Analyzer for Media (formerly Video Indexer) is the most suitable option. This service can identify staff who have removed masks or safety glasses, perform compliance checks every 15 minutes, and minimize both development effort and costs

Comment 10

ID: 1361552 User: user1024 Badges: - Relative Date: 1 year ago Absolute Date: Tue 25 Feb 2025 20:13 Selected Answer: C Upvotes: 1

Video indexer is the correct answer as it has built-in mask detection.

https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/intro-to-spatial-analysis-public-preview?tabs=sa#social-distancing-and-face-mask-detection

Comment 11

ID: 1358305 User: ceris22962 Badges: - Relative Date: 1 year ago Absolute Date: Tue 18 Feb 2025 14:48 Selected Answer: A Upvotes: 1

Face meets the requirements of identifying staff and checking PPE compliance effectively while keeping both development effort and costs low.

Comment 12

ID: 1357228 User: gyaansastra Badges: - Relative Date: 1 year ago Absolute Date: Sun 16 Feb 2025 13:06 Selected Answer: B Upvotes: 1

To meet the requirements of identifying staff who have removed masks or safety glasses, performing a compliance check every 15 minutes, and minimizing both development effort and costs, you should use:

B. Computer Vision

Azure Computer Vision can analyze images and videos to detect whether staff are wearing masks and safety glasses. It can be set up to perform checks at regular intervals, meeting the compliance requirement.

For more information, you can refer to the official - https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview

Comment 13

ID: 1339099 User: hannah380099 Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sat 11 Jan 2025 10:11 Selected Answer: B Upvotes: 2

The Face API does indeed detect human faces and can return various attributes, which is essential for applications where face recognition or verification is needed.

However, in the specific context of monitoring PPE compliance, while face detection is a crucial step, it might not be the most cost-effective and efficient choice compared to using the Computer Vision API directly, especially if the primary task is identifying PPE like masks and safety glasses. The Computer Vision API can perform both object detection (including PPE) and OCR, which might streamline the process with minimal development effort and cost.

Comment 14

ID: 1319436 User: Alan_CA Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 21:08 Selected Answer: A Upvotes: 1

Because you have to IDENTIFY staff, it is definitely Face.

Comment 15

ID: 1315180 User: joeakim Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 20 Nov 2024 11:25 Selected Answer: A Upvotes: 2

"A" makes sense. The task is to recognize people, that DO NOT wear their equipment, not to see who is wearing.
Why check for social distancing, identifying masks, goggles, etc. if we only need to find the people that are not wearing that stuff and therefore are easily detected by Face.

Comment 16

ID: 1308668 User: AL_everyday Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Fri 08 Nov 2024 09:00 Selected Answer: B Upvotes: 3

Copilot: B

To build a monitoring solution for staff compliance with PPE requirements, you should use B. Computer Vision. This service can analyze images and videos to detect whether staff are wearing masks or safety glasses. It is cost-effective and requires minimal development effort compared to other options

Comment 17

ID: 1306879 User: vivekgoel15 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 04 Nov 2024 11:13 Selected Answer: - Upvotes: 1

C is the right option

6. AI-102 Topic 2 Question 36

Sequence
25
Discussion ID
134936
Source URL
https://www.examtopics.com/discussions/microsoft/view/134936-exam-ai-102-topic-2-question-36-discussion/
Posted By
audlindr
Posted At
Feb. 29, 2024, 7:28 p.m.

Question

You are building an app that will use the Azure AI Video Indexer service.

You plan to train a language model to recognize industry-specific terms.

You need to upload a file that contains the industry-specific terms.

Which file format should you use?

  • A. XML
  • B. TXT
  • C. XLS
  • D. PDF

Suggested Answer

B

Comments (6)

Comment 1

ID: 1162907 User: audlindr Badges: Highly Voted Relative Date: 1 year, 6 months ago Absolute Date: Thu 29 Aug 2024 18:28 Selected Answer: B Upvotes: 10

Answer is correct:
As per https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-language-model-with-website

This step creates the model and gives the option to upload text files to the model.

Comment 2

ID: 1706359 User: BAWS Badges: Most Recent Relative Date: 1 month, 4 weeks ago Absolute Date: Tue 13 Jan 2026 08:56 Selected Answer: B Upvotes: 1

B is the correct answer

Comment 3

ID: 1342386 User: Architect_CTO Badges: - Relative Date: 1 year, 1 month ago Absolute Date: Sat 18 Jan 2025 01:56 Selected Answer: B Upvotes: 1

Upload text file(s) to train the language model. The file can either contain a list of words as you would like them to appear in the Video Indexer transcript or the relevant words included naturally in sentences and paragraphs. As better results are achieved with the latter approach, it's recommended for the upload file to contain full sentences or paragraphs related to your content.
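In practice the upload is just a plain text file. A sketch preparing the payload in memory; the multipart field name in the comment and the idea of one-term-per-line content are assumptions based on the customization docs quoted above, and the terms themselves are invented examples:

```python
# Industry-specific terms, one per line. Per the docs quoted above, full
# sentences that use the terms naturally give better results than bare lists.
terms = ["hyperconverged", "colocation", "failover cluster"]
payload = "\n".join(terms).encode("utf-8")

# requests-style multipart part for the (assumed) language-model upload call:
# files = {"file": ("industry_terms.txt", payload, "text/plain")}
print(payload.decode("utf-8"))
```

This also shows why TXT is the natural format: the training input is raw text, with no markup or cell structure for the service to strip away.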

Comment 4

ID: 1218245 User: takaimomoGcup Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Mon 25 Nov 2024 14:24 Selected Answer: B Upvotes: 1

B is correct.

Comment 5

ID: 1213817 User: reiwanotora Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Tue 19 Nov 2024 16:41 Selected Answer: B Upvotes: 1

B is right.

Comment 6

ID: 1212611 User: pepe54362 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 17 Nov 2024 01:50 Selected Answer: B Upvotes: 1

I agree, B is the option

7. AI-102 Topic 1 Question 11

Sequence
26
Discussion ID
75686
Source URL
https://www.examtopics.com/discussions/microsoft/view/75686-exam-ai-102-topic-1-question-11-discussion/
Posted By
ovtchinnikov
Posted At
May 16, 2022, 12:09 p.m.

Question

You build a custom Form Recognizer model.
You receive sample files to use for training the model as shown in the following table.
image
Which three files can you use to train the model? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

  • A. File1
  • B. File2
  • C. File3
  • D. File4
  • E. File5
  • F. File6

Suggested Answer

ACF

Comments (20)

Comment 1

ID: 631367 User: Eltooth Badges: Highly Voted Relative Date: 3 years, 8 months ago Absolute Date: Thu 14 Jul 2022 14:51 Selected Answer: ACF Upvotes: 20

File 2 and 5 are excluded.

The new service limit now goes up to 500 MB, so...
File 1, 3, and 6 are correct for "training the model"; however, if MSFT removes the word "training" from the question, be careful.

https://docs.microsoft.com/en-gb/learn/modules/work-form-recognizer/3-get-started
https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/service-limits?tabs=v21

Comment 2

ID: 979147 User: james2033 Badges: Highly Voted Relative Date: 2 years, 7 months ago Absolute Date: Sat 12 Aug 2023 07:38 Selected Answer: CDF Upvotes: 7

Correct answer: ACDF
https://learn.microsoft.com/en-gb/training/modules/work-form-recognizer/3-get-started

Understand Form Recognizer file input requirements
Form Recognizer works on input documents that meet these requirements:
- Format must be JPG, PNG, BMP, PDF (text or scanned), or TIFF.
- The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
- Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
- The total size of the training data set must be 500 pages or less.

Comment 3

ID: 1705412 User: Mo2010 Badges: Most Recent Relative Date: 2 months ago Absolute Date: Fri 09 Jan 2026 18:48 Selected Answer: ACF Upvotes: 4

File1 (PDF, 20 MB): ✅ Yes. Supported format and under 50 MB.
File2 (MP4, 100 MB): ❌ No. Video files are not supported.
File3 (JPG, 20 MB): ✅ Yes. Supported format and under 50 MB.
File4 (PDF, 100 MB): ❌ No. Exceeds the 50 MB size limit.
File5 (GIF, 1 MB): ❌ No. GIF is not a supported training format.
File6 (JPG, 40 MB): ✅ Yes. Supported format and under 50 MB.
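
The eligibility check above can be sketched in a few lines of Python. This is a toy illustration, not a service API: the allowed formats and the 50 MB custom-training limit are the figures cited in this thread, and the file list is transcribed from the question.

```python
# Apply the custom (template) training limits discussed in this thread:
# JPG/PNG/BMP/TIFF/PDF only, and under 50 MB per file for training.
ALLOWED_FORMATS = {"pdf", "jpg", "jpeg", "png", "bmp", "tiff"}
MAX_TRAINING_MB = 50

# (name, extension, size in MB) transcribed from the question's table.
files = [
    ("File1", "pdf", 20), ("File2", "mp4", 100), ("File3", "jpg", 20),
    ("File4", "pdf", 100), ("File5", "gif", 1), ("File6", "jpg", 40),
]

usable = [
    name for name, ext, size_mb in files
    if ext in ALLOWED_FORMATS and size_mb < MAX_TRAINING_MB
]
print(usable)  # → ['File1', 'File3', 'File6']
```

The survivors match the accepted answer ACF.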

Comment 4

ID: 1622515 User: ArnabRoy15 Badges: - Relative Date: 4 months, 1 week ago Absolute Date: Mon 03 Nov 2025 11:32 Selected Answer: ACD Upvotes: 1

Form Recognizer works on input documents that meet these requirements:
- Format must be JPG, PNG, BMP, PDF (text or scanned), or TIFF.
- The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
- Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
- The total size of the training data set must be 500 pages or less.

Comment 5

ID: 1332449 User: Architect_CTO Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Fri 27 Dec 2024 16:02 Selected Answer: ACF Upvotes: 3

ACF is correct: for training, the maximum limit is 50 MB; for analysis in production, the limit can go up to 500 MB.

Comment 6

ID: 1332229 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Fri 27 Dec 2024 06:01 Selected Answer: ACF Upvotes: 3

The Form Recognizer model was removed from the AI-102 exam.

Comment 7

ID: 1272417 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Mon 26 Aug 2024 07:01 Selected Answer: ACF Upvotes: 1

1 3 and 6

Comment 8

ID: 1217627 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:58 Selected Answer: ACF Upvotes: 1

1 3 6 YES!

Comment 9

ID: 1077464 User: RupRizal Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Wed 22 Nov 2023 16:01 Selected Answer: - Upvotes: 6

For custom model training the total size is still 50MB. Answer is correct

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/concept-custom-classifier?view=doc-intel-4.0.0

Comment 10

ID: 1058605 User: Tazmania98 Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Tue 31 Oct 2023 09:59 Selected Answer: - Upvotes: 1

The question specifies a custom model, so the 50 MB size limit applies.
See this link: https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/service-limits?view=doc-intel-3.1.0
Correct answers: ACF

Comment 11

ID: 995779 User: kiro_kocha Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Fri 01 Sep 2023 09:03 Selected Answer: ACF Upvotes: 1

I think A,C,F are correct

Comment 12

ID: 969801 User: acsoma Badges: - Relative Date: 2 years, 7 months ago Absolute Date: Wed 02 Aug 2023 08:50 Selected Answer: - Upvotes: 3

appeared in august exam

Comment 13

ID: 950974 User: james2033 Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Thu 13 Jul 2023 22:13 Selected Answer: ACF Upvotes: 2

Based on file type (PDF, JPG) and a file size of less than 50 MB.

Comment 14

ID: 839668 User: RAN_L Badges: - Relative Date: 2 years, 12 months ago Absolute Date: Wed 15 Mar 2023 09:18 Selected Answer: ACF Upvotes: 1

The supported file types for training a Form Recognizer model are PDF, PNG, and JPEG, so you can use only files in one of these formats. Based on the information provided, the files that can be used for training the model are:

A. File1 (PDF)
C. File3 (JPG)
F. File6 (JPG)

Comment 15

ID: 631982 User: AusAv Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Sat 16 Jul 2022 05:17 Selected Answer: - Upvotes: 2

It should allow all except the GIF and MP4 on the paid tier; the free tier supports files up to 4 MB: https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/concept-read#input-requirements

Comment 15.1

ID: 683280 User: Tickxit Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Fri 30 Sep 2022 08:38 Selected Answer: - Upvotes: 1

“For custom model training, the total size of training data is 50 MB for the template model and 1 GB for the neural model.”
“The file size for analyzing documents must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.” -> But the question talks about training the model, so I think the 50 MB limit applies; File 4 can't be used.

Comment 16

ID: 602944 User: DingDongSingSong Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Tue 17 May 2022 15:35 Selected Answer: - Upvotes: 3

Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). So there are more than 3 correct responses: PDF and JPG files (2 each), therefore 4 possible responses even though the question asks for 3.

Comment 16.1

ID: 628667 User: fux Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Fri 08 Jul 2022 09:16 Selected Answer: - Upvotes: 5

The PDF must be capped at 50 MB, so in reality there are only 3 answers.

Comment 16.1.1

ID: 979150 User: james2033 Badges: - Relative Date: 2 years, 7 months ago Absolute Date: Sat 12 Aug 2023 07:40 Selected Answer: - Upvotes: 2

No, the PDF size limit is 500 MB now; see https://learn.microsoft.com/en-gb/training/modules/work-form-recognizer/3-get-started .

Comment 17

ID: 602530 User: ovtchinnikov Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Mon 16 May 2022 12:09 Selected Answer: - Upvotes: 1

1 & 3 correct.
But GIFs are also allowed, if I remember right.

8. AI-102 Topic 2 Question 1

Sequence
29
Discussion ID
54796
Source URL
https://www.examtopics.com/discussions/microsoft/view/54796-exam-ai-102-topic-2-question-1-discussion/
Posted By
motu
Posted At
June 7, 2021, 1:15 p.m.

Question

HOTSPOT -
You are developing an application that will use the Computer Vision client library. The application has the following code.
image
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image

Comments 19 comments Click to expand

Comment 1

ID: 376716 User: motu Badges: Highly Voted Relative Date: 4 years, 9 months ago Absolute Date: Mon 07 Jun 2021 13:15 Selected Answer: - Upvotes: 103

Box 3 is Yes, the stream will be generated from a local image!

Comment 1.1

ID: 1264563 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 12 Aug 2024 11:00 Selected Answer: - Upvotes: 6

Agree with this man

Comment 1.2

ID: 1703354 User: obidiya22 Badges: - Relative Date: 2 months, 1 week ago Absolute Date: Sat 03 Jan 2026 00:51 Selected Answer: - Upvotes: 1

Correction: it's Yes.

Comment 2

ID: 398271 User: Adedoyin_Simeon Badges: Highly Voted Relative Date: 4 years, 8 months ago Absolute Date: Sun 04 Jul 2021 13:38 Selected Answer: - Upvotes: 39

Box 3 should be Yes; a stream is only a pathway for data, and in this case the data actually comes from a local file. The correct answer would be No, Yes, Yes.

Comment 3

ID: 1409504 User: MiraalfasaS Badges: Most Recent Relative Date: 11 months, 3 weeks ago Absolute Date: Mon 24 Mar 2025 02:44 Selected Answer: - Upvotes: 2

No, Yes, Yes

Comment 4

ID: 1355506 User: helg Badges: - Relative Date: 1 year ago Absolute Date: Wed 12 Feb 2025 08:22 Selected Answer: - Upvotes: 1

The code will perform face recognition. No
The code will list tags and their associated confidence. Yes
The code will read a file from the local file system. Yes

Comment 5

ID: 1276149 User: MostafaAbdellahAhmed Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 17:27 Selected Answer: - Upvotes: 2

1. **The code will perform face recognition.**
**Answer:** No
The code uses the `ComputerVisionClient` to analyze an image for visual features such as `Description` and `Tags`, not for face recognition.

2. **The code will list tags and their associated confidence.**
**Answer:** Yes
The code retrieves tags and their confidence scores from the `results.Tags` property and outputs them.

3. **The code will read a file from the local file system.**
**Answer:** Yes
The code uses `File.OpenRead(localImage)` to read an image file from the local file system.
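
The behaviour in points 2 and 3 can be illustrated with a small stand-in for the SDK objects. The `ImageTag` class below is hand-made to mimic the shape of `results.Tags` (each tag has a name and a confidence); it is not the real Computer Vision SDK type, and the tag values are invented.

```python
# Mock of the `results.Tags` collection the question's C# snippet iterates over.
from dataclasses import dataclass

@dataclass
class ImageTag:
    name: str
    confidence: float  # 0.0 .. 1.0, as returned by the analyze call

# Hand-made stand-in for what AnalyzeImageInStreamAsync would return.
results_tags = [ImageTag("cat", 0.99), ImageTag("pet", 0.95)]

# Equivalent of the loop that lists each tag and its associated confidence.
for tag in results_tags:
    print(f"{tag.name} ({tag.confidence:.0%})")  # e.g. "cat (99%)"
```

The local-file half of the question is just the Python analog of `File.OpenRead`: `open(local_image, "rb")` yields a stream that is then sent to the service.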

Comment 6

ID: 1239527 User: anto69 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 30 Jun 2024 07:28 Selected Answer: - Upvotes: 1

N - Y - Y

Comment 7

ID: 1230920 User: gary_cooper Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 15 Jun 2024 13:55 Selected Answer: - Upvotes: 7

The code will perform face recognition --> No
The code will list tags and their associated confidence --> Yes
The code will read a file from the local file system --> Yes

Comment 8

ID: 1225564 User: ARM360 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 16:11 Selected Answer: - Upvotes: 3

The code will perform face recognition.
No. The code is set to analyze image descriptions and tags (VisualFeatureTypes.Description, VisualFeatureTypes.Tags), but it does not include face recognition.


The code will list tags and their associated confidence.
Yes. The code includes a loop that iterates through results.Tags and prints each tag's name and confidence.

The code will read a file from the local file system.
Yes. The code uses File.OpenRead(localImage) to open and read an image file from the local file system.

So, the answers are:

The code will perform face recognition. No
The code will list tags and their associated confidence. Yes
The code will read a file from the local file system. Yes

Comment 9

ID: 1225416 User: p2006 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 13:02 Selected Answer: - Upvotes: 1

https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library?tabs=windows%2Cvisual-studio&pivots=programming-language-csharp#analyze-image

Comment 10

ID: 1220253 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:53 Selected Answer: - Upvotes: 2

No
Yes
Yes

Comment 11

ID: 1205030 User: anntv252 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 01 May 2024 13:04 Selected Answer: - Upvotes: 3

No, Yes, Yes
Image analysis.
Returns the category and confidence score.
Reads a local file stream or streaming source.

Comment 12

ID: 1194126 User: michaelmorar Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 12 Apr 2024 05:58 Selected Answer: - Upvotes: 1

The fact that the parameter is named 'local' might be misleading; it could be anything.
HOWEVER, I'm not a C# expert, but File.OpenRead tells us that the file is on a filesystem to which the device has access.
So my vote is NYY.

Comment 13

ID: 1182383 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 11:21 Selected Answer: - Upvotes: 1

Final Answer:
N Y Y

Comment 14

ID: 645911 User: ninjia Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Fri 12 Aug 2022 15:30 Selected Answer: - Upvotes: 6

Box 1: No. The code generates descriptions and tags. See lines 3 and 4.
Box 2: Yes. The code displays tag.Name and tag.Confidence
Box 3: Yes. File.OpenRead reads a local file. See https://docs.microsoft.com/en-us/dotnet/api/system.io.file.openread?view=net-6.0

Comment 15

ID: 632010 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Sat 16 Jul 2022 06:43 Selected Answer: - Upvotes: 2

No
Yes
Yes

Comment 16

ID: 622145 User: mohamedba Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Sat 25 Jun 2022 15:43 Selected Answer: - Upvotes: 3

Answer is NYY

Comment 17

ID: 590819 User: gursimran_s Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Sun 24 Apr 2022 00:36 Selected Answer: - Upvotes: 1

https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.computervision.analyzeimageinstreamasync?view=azure-java-legacy

9. AI-102 Topic 2 Question 8

Sequence
32
Discussion ID
74355
Source URL
https://www.examtopics.com/discussions/microsoft/view/74355-exam-ai-102-topic-2-question-8-discussion/
Posted By
kiassi1998
Posted At
April 24, 2022, 7:14 p.m.

Question

DRAG DROP -
You are developing an application that will recognize faults in components produced on a factory production line. The components are specific to your business.
You need to use the Custom Vision API to help detect common faults.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand

Step 1: Create a project -
Create a new project.
Step 2: Upload and tag the images
Choose training images. Then upload and tag the images.
Step 3: Train the classifier model.

Train the classifier -
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier

Comments 20 comments Click to expand

Comment 1

ID: 806178 User: arbest Badges: Highly Voted Relative Date: 3 years ago Absolute Date: Sun 12 Feb 2023 11:00 Selected Answer: - Upvotes: 24

The answer is correct:
Create a project
Upload and tag images
Train a classifier model
https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/quickstarts/image-classification?tabs=visual-studio&pivots=programming-language-csharp
For Type of model
https://azure.microsoft.com/en-us/use-cases/defect-detection-with-image-analysis/
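
The three steps (create a project, upload and tag images, train a classifier) can be illustrated with a toy, pure-Python stand-in. This is not the Custom Vision SDK; the class, method, and tag names are invented, and "training" is reduced to a nearest-centroid average so the flow stays runnable.

```python
# Toy mimic of the Custom Vision workflow: create project -> upload & tag -> train.
from collections import defaultdict

class ToyClassifierProject:
    def __init__(self, name):
        self.name = name                      # step 1: create a project
        self.examples = defaultdict(list)     # tag -> list of feature vectors

    def upload_and_tag(self, features, tag):  # step 2: upload and tag images
        self.examples[tag].append(features)

    def train(self):                          # step 3: train the classifier
        # Nearest-centroid "training": average the features per tag.
        self.centroids = {
            tag: [sum(col) / len(vecs) for col in zip(*vecs)]
            for tag, vecs in self.examples.items()
        }

    def predict(self, features):
        def dist(centroid):
            return sum((a - b) ** 2 for a, b in zip(features, centroid))
        return min(self.centroids, key=lambda t: dist(self.centroids[t]))

project = ToyClassifierProject("component-faults")
project.upload_and_tag([0.9, 0.1], "ok")
project.upload_and_tag([0.8, 0.2], "ok")
project.upload_and_tag([0.1, 0.9], "faulty")
project.upload_and_tag([0.2, 0.8], "faulty")
project.train()
print(project.predict([0.15, 0.85]))  # → faulty
```

In the real service the same sequence runs against the Custom Vision portal or training SDK, with images instead of feature vectors.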

Comment 2

ID: 1703113 User: rbh1090 Badges: Most Recent Relative Date: 2 months, 1 week ago Absolute Date: Fri 02 Jan 2026 02:35 Selected Answer: - Upvotes: 1

I think the last box should be "Train the object detection model".

Object detection is better suited for quality control applications in manufacturing because it provides both the identification of the fault and its precise location.

1) Pinpointing defects: in a factory setting, knowing where the fault is on the component is critical for automated systems to discard or repair the part. Classification only tells you the part is 'defective', not the exact spot of the defect.
2) Multiple faults: a single component might have multiple, different faults. Object detection can identify and label each specific fault individually within the same image.
3) Actionable data: the bounding box output provides actionable data for downstream automation systems, like robotic arms programmed to interact with the component based on specific coordinates.

Comment 3

ID: 1410979 User: massnonn Badges: - Relative Date: 11 months, 2 weeks ago Absolute Date: Thu 27 Mar 2025 19:20 Selected Answer: - Upvotes: 2

why not object detection?

Comment 4

ID: 1297451 User: 4371883 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 14 Oct 2024 12:23 Selected Answer: - Upvotes: 4

Got a similar one in Oct 2024, some wording change from memory.

Comment 5

ID: 1291909 User: ImmoSH Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 01 Oct 2024 12:32 Selected Answer: - Upvotes: 2

Answer is correct; it is a classifier, since Custom Vision only builds an image classifier and detection is part of the normal Vision service.

Comment 6

ID: 1236635 User: JacobZ Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 25 Jun 2024 01:03 Selected Answer: - Upvotes: 2

Got this in the exam, Jun 2024.

Comment 7

ID: 1220248 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:48 Selected Answer: - Upvotes: 3

1. Create a project
2. Upload and tag images
3. Train a classifier model

Comment 8

ID: 1218340 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 15:16 Selected Answer: - Upvotes: 3

1. Create a project
2. Upload and tag images
3. Train a classifier model

Comment 9

ID: 1182404 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 11:46 Selected Answer: - Upvotes: 1

Final Answer:

Create a project
Upload and tag images
Train a classifier model

Comment 10

ID: 1179857 User: Ody Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 05:58 Selected Answer: - Upvotes: 1

I think the key is recognizing it's a production line. That means the same component is coming down the line.

Then there are other lines producing other components.

Classification would be best suited for that scenario. If we were looking at an image that might have many different components in one image and wanted to find the location of different faults, then object detection would be more appropriate.

Comment 11

ID: 1131641 User: suzanne_exam Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Thu 25 Jan 2024 13:19 Selected Answer: - Upvotes: 1

It's a classifier model because it's not detecting whether the objects are there or not; it's classifying them as faulty or not.

Comment 12

ID: 1028451 User: sl_mslconsulting Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Mon 09 Oct 2023 05:21 Selected Answer: - Upvotes: 2

I would pick object detection. Custom Vision functionality can be divided into two features. Image classification applies one or more labels to an entire image. Object detection is similar, but it returns the coordinates in the image where the applied label(s) can be found. If I were the user, I would certainly want to know where the faults are located so I could take a second look; it's not very useful to be told only that something is wrong without being told where, since I'd have to do extra work to find it.

Comment 13

ID: 806170 User: NNU Badges: - Relative Date: 3 years ago Absolute Date: Sun 12 Feb 2023 10:49 Selected Answer: - Upvotes: 3

Yes, the answer is correct:
Create a project
Upload and tag images
Train the classifier model
https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/quickstarts/image-classification?tabs=visual-studio&pivots=programming-language-csharp
and for type of model
https://azure.microsoft.com/en-us/use-cases/defect-detection-with-image-analysis/

Comment 14

ID: 759454 User: Adedoyin_Simeon Badges: - Relative Date: 3 years, 2 months ago Absolute Date: Wed 28 Dec 2022 07:49 Selected Answer: - Upvotes: 2

Correct answer should be:
Create
Upload & Tag
Train the object detection model

The question was to help "detect" common faults. Detection means where the fault actually is in the image.

Comment 14.1

ID: 765722 User: cce1 Badges: - Relative Date: 3 years, 2 months ago Absolute Date: Wed 04 Jan 2023 15:03 Selected Answer: - Upvotes: 5

Nope, the answer should be:
Create, Upload & Tag, and Train classifier (not a detection model),
because the classifier has to classify whether the given component is faulty or not...

Comment 15

ID: 632809 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 05:50 Selected Answer: - Upvotes: 3

Create
Upload
Train the classifier

Comment 16

ID: 631291 User: ppo12 Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Thu 14 Jul 2022 12:23 Selected Answer: - Upvotes: 2

The question is quite confusing, since Object Detection could technically be correct IMO.

Comment 16.1

ID: 669064 User: momentumhd Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Wed 14 Sep 2022 15:42 Selected Answer: - Upvotes: 1

You don't tag the detection images, so by exclusion the answer points to classification.

Comment 17

ID: 591176 User: kiassi1998 Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Sun 24 Apr 2022 19:14 Selected Answer: - Upvotes: 4

Correct

Comment 17.1

ID: 620189 User: sdokmak Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Wed 22 Jun 2022 07:43 Selected Answer: - Upvotes: 4

Agreed. Train the classifier, not the object detection model, because they make no mention of needing to know the location of the detections, but they do mention detecting common faults. So it can either classify as faulty / not faulty, or also classify different fault types... not clear on that one, but the answer is correct.

10. AI-102 Topic 4 Question 33

Sequence
33
Discussion ID
149785
Source URL
https://www.examtopics.com/discussions/microsoft/view/149785-exam-ai-102-topic-4-question-33-discussion/
Posted By
cheetah313
Posted At
Oct. 19, 2024, 11:50 a.m.

Question

You have an Azure subscription that contains an Azure AI Document Intelligence resource named AIdoc1.

You have an app named App1 that uses AIdoc1. App1 analyzes business cards by calling business card model v2.1.

You need to update App1 to ensure that the app can interpret QR codes. The solution must minimize administrative effort.

What should you do first?

  • A. Upgrade the business card model to v3.0.
  • B. Implement the read model.
  • C. Deploy a custom model.
  • D. Implement the contract model.

Suggested Answer

A

Comments 18 comments Click to expand

Comment 1

ID: 1299975 User: cheetah313 Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Sat 19 Oct 2024 11:50 Selected Answer: A Upvotes: 7

Correct according to ChatGPT.

Comment 2

ID: 1701525 User: 19b3ff8 Badges: Most Recent Relative Date: 2 months, 2 weeks ago Absolute Date: Wed 24 Dec 2025 13:05 Selected Answer: A Upvotes: 1

https://learn.microsoft.com/ja-jp/azure/ai-services/document-intelligence/prebuilt/business-card?view=doc-intel-4.0.0

Comment 3

ID: 1670153 User: bedic Badges: - Relative Date: 3 months, 1 week ago Absolute Date: Fri 05 Dec 2025 14:08 Selected Answer: B Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/model-overview?view=doc-intel-4.0.0#model-analysis-features

Comment 3.1

ID: 1670154 User: bedic Badges: - Relative Date: 3 months, 1 week ago Absolute Date: Fri 05 Dec 2025 14:10 Selected Answer: - Upvotes: 1

A cannot be the correct answer.
The following add-on capabilities are available for Document Intelligence. For all models except the business card model, Document Intelligence now supports add-on capabilities to allow for more sophisticated analysis.
https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/model-overview?view=doc-intel-4.0.0#add-on-capability

Comment 4

ID: 1609046 User: test1025 Badges: - Relative Date: 5 months, 4 weeks ago Absolute Date: Sun 14 Sep 2025 19:19 Selected Answer: A Upvotes: 3

The correct answer is A: upgrade the business card model to v3.0. Per Copilot, the business card model v2.1 does not support QR code extraction.
Starting from v3.0, the prebuilt-businessCard model includes QR code support, allowing it to extract data encoded in QR codes on business cards.
Upgrading to v3.0 or v3.1 is a simple change that does not require retraining or custom model development, making it the least-effort solution.

Comment 5

ID: 1608345 User: bannamk Badges: - Relative Date: 6 months ago Absolute Date: Thu 11 Sep 2025 20:29 Selected Answer: A Upvotes: 1

The correct answer is A: upgrade the business card model to v3.0. Per Copilot, the business card model v2.1 does not support QR code extraction.
Starting from v3.0, the prebuilt-businessCard model includes QR code support, allowing it to extract data encoded in QR codes on business cards.
Upgrading to v3.0 or v3.1 is a simple change that does not require retraining or custom model development, making it the least-effort solution.

Comment 6

ID: 1565438 User: marcellov Badges: - Relative Date: 10 months, 2 weeks ago Absolute Date: Thu 01 May 2025 14:56 Selected Answer: A Upvotes: 1

From AI:

To enable QR code interpretation with minimal administrative effort, the correct first step is A. Upgrade the business card model to v3.0.

Key rationale:
Prebuilt models like the business card model in v3.0 (or later) support add-on capabilities such as barcode/QR code extraction via the features=barcodes parameter.
The v2.1 business card model (likely referenced here) predates add-on features and lacks native QR code support without custom code.
Option B (read model) extracts text but does not inherently process QR codes.
Option C (custom model) requires training and introduces unnecessary complexity for QR code extraction.
Option D (contract model) is unrelated to QR codes or business cards.

Comment 7

ID: 1564549 User: tech_rum Badges: - Relative Date: 10 months, 2 weeks ago Absolute Date: Tue 29 Apr 2025 01:39 Selected Answer: A Upvotes: 1

Correct answer is Option A

The Read model can technically read QR codes immediately; however, it only returns raw text. You lose structured field extraction like Name, Company, and Phone. App1 currently expects structured field output from the Business Card model, so switching to the Read model would break the app logic unless you redesign App1.

Comment 8

ID: 1354602 User: miai74uu Badges: - Relative Date: 1 year ago Absolute Date: Mon 10 Feb 2025 19:12 Selected Answer: B Upvotes: 1

Document Intelligence supports more sophisticated and modular analysis capabilities. Use the add-on features to extend the results to include more features extracted from your documents.
- barcodes

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/concept/add-on-capabilities?view=doc-intel-4.0.0&tabs=rest-api

Comment 9

ID: 1334697 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Tue 31 Dec 2024 09:30 Selected Answer: B Upvotes: 2

Option A (upgrade the business card model to v3.0) isn't the right choice because even though the newer version of the business card model might include improvements, it doesn't specifically address the need to interpret QR codes.

Implementing the read model (option B) is the most direct way to ensure that App1 can interpret QR codes without requiring additional changes or custom models. The read model includes the capability to read barcodes and QR codes, providing the functionality you need with minimal administrative effort.

Comment 10

ID: 1333516 User: TimZ1010 Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sun 29 Dec 2024 14:09 Selected Answer: B Upvotes: 3

B is the answer

Comment 11

ID: 1331057 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Tue 24 Dec 2024 09:42 Selected Answer: A Upvotes: 3

To ensure that App1 can interpret QR codes with minimal administrative effort, the best approach would be to:

A. Upgrade the business card model to v3.0.

The business card model version 3.0 includes enhanced capabilities, including the ability to interpret QR codes, which would directly address your requirement without needing to deploy additional models or implement complex configurations.

This upgrade will streamline the process, leveraging the existing capabilities of the business card model and minimizing the need for additional administrative tasks.

Comment 12

ID: 1330934 User: MASANASA Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Mon 23 Dec 2024 21:06 Selected Answer: B Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/concept/add-on-capabilities?view=doc-intel-4.0.0&tabs=sample-code

Comment 12.1

ID: 1330935 User: MASANASA Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Mon 23 Dec 2024 21:08 Selected Answer: - Upvotes: 2

poller = document_intelligence_client.begin_analyze_document(
    "prebuilt-read",
    document=f,  # local file stream; kwarg name as in azure-ai-formrecognizer 3.x
    features=["barcodes"],  # add-on that also extracts QR codes
)

Comment 13

ID: 1322800 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 15:58 Selected Answer: B Upvotes: 2

My answer is based on the model v4.0.

According to my link:
https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/model-overview?view=doc-intel-4.0.0#model-analysis-features

Barcode isn't supported in the prebuilt Business card model. It's optional in the Read model and enabled in the Contract model. I go for the Contract model because of the minimal administrative effort: it will cost more to set up from the start, but Read can't ensure the QR reading, and if the QR read fails there will be manual labour fixing it, which can potentially be a lot of work.

Comment 13.1

ID: 1322801 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 15:59 Selected Answer: - Upvotes: 1

Omg, my answer should be D, not B

Comment 14

ID: 1312311 User: Alan_CA Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 14 Nov 2024 20:43 Selected Answer: B Upvotes: 3

Add-on capabilities (including QR code extraction) are available in all models except the Business card model:
https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/concept/add-on-capabilities?view=doc-intel-3.1.0

Comment 15

ID: 1310459 User: AmitSonal Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 12 Nov 2024 07:23 Selected Answer: B Upvotes: 2

To ensure that App1 can interpret QR codes with minimal administrative effort, you should implement the read model (Option B).
The read model in Azure AI Document Intelligence is designed to extract text from various types of documents, including QR codes.
Upgrading the business card model to v3.0 (Option A) or deploying a custom model (Option C) would involve more administrative effort and may not specifically address the need to interpret QR codes.
Implementing the contract model (Option D) is not relevant to this requirement.
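
Whichever model ends up being called, the app-side work of pulling QR values out of the analyze result can be sketched as below. The nested dict is a hand-made stand-in whose field names follow the Document Intelligence v3.1 REST shape (`pages[].barcodes[]` with `kind` and `value`); the URLs and values in it are invented.

```python
# Mocked analyze result, mimicking "prebuilt-read" output with the
# "barcodes" add-on enabled. Field names follow the v3.1 REST shape.
result = {
    "pages": [
        {
            "pageNumber": 1,
            "barcodes": [
                {"kind": "QRCode", "value": "https://contoso.example/vcard/123", "confidence": 0.98},
                {"kind": "Code128", "value": "0012345", "confidence": 0.95},
            ],
        }
    ]
}

def qr_values(analyze_result):
    """Return the decoded value of every QR code, across all pages."""
    return [
        barcode["value"]
        for page in analyze_result.get("pages", [])
        for barcode in page.get("barcodes", [])
        if barcode["kind"] == "QRCode"
    ]

print(qr_values(result))  # → ['https://contoso.example/vcard/123']
```

Note the `kind` filter: the add-on also returns linear barcodes (e.g. Code128), so the app must select QR codes explicitly.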

11. AI-102 Topic 8 Question 3

Sequence
51
Discussion ID
149674
Source URL
https://www.examtopics.com/discussions/microsoft/view/149674-exam-ai-102-topic-8-question-3-discussion/
Posted By
jafaca
Posted At
Oct. 17, 2024, 2:06 p.m.

Question

You have an Azure subscription.

You are building a social media app that will enable users to share images.

You need to ensure that inappropriate content uploaded by the users is blocked. The solution must minimize development effort.

What are two tools that you can use? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

  • A. Azure AI Document Intelligence
  • B. Microsoft Defender for Cloud Apps
  • C. Azure AI Content Safety
  • D. Azure AI Vision
  • E. Azure AI Custom Vision

Suggested Answer

CD

Comments 9 comments Click to expand

Comment 1

ID: 1314251 User: 3fbc31b Badges: Highly Voted Relative Date: 1 year, 3 months ago Absolute Date: Mon 18 Nov 2024 22:53 Selected Answer: CD Upvotes: 5

The question mentions "must minimize development effort". That means you should lean away from anything custom.

Answers are C and D.

Comment 2

ID: 1628579 User: harshad883 Badges: Most Recent Relative Date: 3 months, 2 weeks ago Absolute Date: Wed 26 Nov 2025 19:49 Selected Answer: CD Upvotes: 1

Answers are C and D.

Comment 3

ID: 1400440 User: Mattt Badges: - Relative Date: 11 months, 4 weeks ago Absolute Date: Wed 19 Mar 2025 10:10 Selected Answer: CD Upvotes: 1

C&D are correct

Comment 4

ID: 1304286 User: tae893 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 05:14 Selected Answer: - Upvotes: 1

A, D
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-detecting-adult-content

Comment 4.1

ID: 1304287 User: tae893 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 05:15 Selected Answer: - Upvotes: 2

Sorry C, D is correct

Comment 5

ID: 1304223 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 00:39 Selected Answer: CD Upvotes: 2

Correct answers are C and D, verified using CoPilot:

The two tools you can use to ensure inappropriate content is blocked while minimizing development effort are:

C. Azure AI Content Safety and D. Azure AI Vision.

Azure AI Content Safety provides advanced algorithms for processing images and text to detect and block harmful content. Azure AI Vision offers computer vision capabilities to analyze images and detect inappropriate content.

Comment 6

ID: 1303891 User: Fefnut Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 28 Oct 2024 09:59 Selected Answer: - Upvotes: 1

CD
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-detecting-adult-content

Comment 7

ID: 1303856 User: e41f7aa Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 28 Oct 2024 08:10 Selected Answer: CD Upvotes: 1

Answer is C & D

Comment 8

ID: 1299194 User: jafaca Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Thu 17 Oct 2024 14:06 Selected Answer: C Upvotes: 2

Azure AI Vision & Content Safety

12. AI-102 Topic 1 Question 67

Sequence
52
Discussion ID
135030
Source URL
https://www.examtopics.com/discussions/microsoft/view/135030-exam-ai-102-topic-1-question-67-discussion/
Posted By
audlindr
Posted At
March 2, 2024, 1:56 a.m.

Question

You have a Microsoft OneDrive folder that contains a 20-GB video file named File1.avi.

You need to index File1.avi by using the Azure Video Indexer website.

What should you do?

  • A. Upload File1.avi to the www.youtube.com webpage, and then copy the URL of the video to the Azure AI Video Indexer website.
  • B. Download File1.avi to a local computer, and then upload the file to the Azure AI Video Indexer website.
  • C. From OneDrive, create a download link, and then copy the link to the Azure AI Video Indexer website.
  • D. From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website.

Suggested Answer

D

Answer Description Click to expand


Community Answer Votes

Comments 20 comments Click to expand

Comment 1

ID: 1163853 User: audlindr Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 01:56 Selected Answer: C Upvotes: 16

https://learn.microsoft.com/en-us/azure/azure-video-indexer/odrv-download

Copy the embed code and extract only the URL part including the key. For example:

https://onedrive.live.com/embed?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk

Replace embed with download. You will now have a URL that looks like this:

https://onedrive.live.com/download?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk

Now enter this URL in the Azure AI Video Indexer website in the URL field.
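The embed-to-download rewrite described above is a simple string substitution; a minimal sketch (using the redacted sample URL from the Microsoft doc) is:

```python
# Sketch of the embed -> download URL rewrite for OneDrive links.
# Only the path segment changes; the query string (cid, resid, authkey)
# is kept intact so the link remains authorized.
def embed_to_download(embed_url: str) -> str:
    return embed_url.replace("/embed?", "/download?", 1)

src = ("https://onedrive.live.com/embed?cid=5BC591B7C713B04F"
       "&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk")
print(embed_to_download(src))
```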

Comment 1.1

ID: 1342645 User: nkorczynski Badges: - Relative Date: 1 year, 1 month ago Absolute Date: Sat 18 Jan 2025 19:32 Selected Answer: - Upvotes: 3

"You should be able to view or hear the file from placing the URL in the browser location field."
Because it's a download link, C is wrong. D is correct.

Comment 2

ID: 1332786 User: pabsinaz Badges: Highly Voted Relative Date: 1 year, 2 months ago Absolute Date: Sat 28 Dec 2024 05:59 Selected Answer: D Upvotes: 11

Option D allows you to directly link the video file from OneDrive to Azure Video Indexer without needing to download and upload large files, saving time and bandwidth. This method provides a seamless way to access and process large files stored in OneDrive.

Options A, B, and C either involve unnecessary steps or do not directly integrate with Azure Video Indexer as efficiently.

Comment 3

ID: 1628386 User: harshad883 Badges: Most Recent Relative Date: 3 months, 2 weeks ago Absolute Date: Tue 25 Nov 2025 20:04 Selected Answer: D Upvotes: 2

Azure Video Indexer supports indexing videos from:

Local upload
Public URLs (including OneDrive sharing links)
Azure Blob Storage


For large files (like 20 GB), uploading manually is inefficient. Instead, providing a shared link from OneDrive allows Video Indexer to fetch the file directly.
A download link (option C) often expires or requires authentication, while a sharing link is designed for external access.

Comment 4

ID: 1365435 User: Mattt Badges: - Relative Date: 1 year ago Absolute Date: Wed 05 Mar 2025 15:41 Selected Answer: C Upvotes: 1

C is correct

Comment 5

ID: 1336326 User: 201 Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sat 04 Jan 2025 11:05 Selected Answer: D Upvotes: 5

Not C, because a download link is one-time only and temporary.

Comment 6

ID: 1326599 User: MASANASA Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sat 14 Dec 2024 20:56 Selected Answer: C Upvotes: 1

should be C. https://learn.microsoft.com/ja-jp/azure/azure-video-indexer/considerations-when-use-at-scale

Comment 7

ID: 1321706 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 04 Dec 2024 06:20 Selected Answer: B Upvotes: 2

Given this documentation:
https://learn.microsoft.com/en-us/azure/azure-video-indexer/upload-index-media

You can't:
- Use URL's from streaming services like Youtube.
- Use URL's where the file size is > 2 GB (in the troubleshooting section).

Only remaining option is B.

Comment 7.1

ID: 1322011 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 04 Dec 2024 17:09 Selected Answer: - Upvotes: 1

And then I found this:
https://learn.microsoft.com/en-us/azure/azure-video-indexer/considerations-when-use-at-scale
...written by the same person. Amazing. Have to change my mind to C.

Comment 8

ID: 1319990 User: friendlyvlad Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sat 30 Nov 2024 02:27 Selected Answer: D Upvotes: 5

This method allows you to directly use the file from OneDrive without the need to download and re-upload it, saving time and bandwidth.

Comment 9

ID: 1307394 User: Alan_CA Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 05 Nov 2024 15:15 Selected Answer: B Upvotes: 1

Tried it with a sharing link for a video in onedrive, it does not work :
"The URL is either invalid, not secured, or there was no file found in the URL"

Comment 10

ID: 1297441 User: 4371883 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 14 Oct 2024 12:16 Selected Answer: - Upvotes: 4

Got this question in Oct 2024 exam, except the "download link" is replaced by "embed link".

Comment 11

ID: 1296877 User: Afsjoaquim Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 13 Oct 2024 14:02 Selected Answer: D Upvotes: 4

There isn't a separate "download link" option; sharing links serve this purpose. OneDrive uses sharing links to provide access, which can also allow downloading.

Comment 12

ID: 1276129 User: MostafaAbdellahAhmed Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 17:02 Selected Answer: - Upvotes: 2

D. From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website.
Explanation:
Azure Video Indexer can index videos from URLs provided by users. For Azure Video Indexer to access the video, the link should be a public sharing link that does not require authentication. Creating a sharing link from OneDrive provides this functionality, allowing Azure Video Indexer to fetch the video for indexing.

Comment 13

ID: 1275856 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 07:13 Selected Answer: - Upvotes: 2

These kinds of naive questions do not consider that most people who do these exams are corporate employees (following illogical certification policies) and they do not use video indexing from OneDrive data. Such thoughtless questions. Really, I should read how OneDrive generates URLs??

Comment 13.1

ID: 1275857 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 07:15 Selected Answer: - Upvotes: 1

It cannot be the download link as I guess that will not be publicly accessible. I hope one-drive share link is publicly accessible with a sharing-token (or whatever).

Comment 14

ID: 1265571 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Wed 14 Aug 2024 10:29 Selected Answer: - Upvotes: 1

I think C and D are equivalent

Comment 15

ID: 1265463 User: Moneybing Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Wed 14 Aug 2024 03:21 Selected Answer: - Upvotes: 1

Answer could be D, according to Microsoft Copilot.
Azure AI Video Indexer typically requires a sharing link to a video hosted online.
Azure AI Video Indexer doesn’t require a separate download link.

Azure AI Video Indexer has File Size Limit:
-If you’re uploading a file directly from your device, the maximum file size allowed is 2 GB.
-When uploading a video from a URL, the file size limit increases to 30 GB.

Comment 16

ID: 1240635 User: fqc Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 02 Jul 2024 10:17 Selected Answer: - Upvotes: 1

B. Download File1.avi to a local computer, and then upload the file to the Azure AI Video Indexer website.
Here's why this is the correct approach:
File size: The video file is 20 GB, which is a large file. Azure Video Indexer has file size limitations for direct uploads.
Direct access: Azure Video Indexer typically requires direct access to the video file for processing. Downloading the file to a local computer and then uploading it ensures that the Video Indexer has direct access to the entire file.
Compatibility: This method ensures that the file is in a format that Azure Video Indexer can directly process without relying on external services or links.
Control: Downloading and uploading manually gives you more control over the process and allows you to verify the file before indexing.

Comment 17

ID: 1235497 User: Dazigster Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 18:22 Selected Answer: B Upvotes: 3

When I took the exam 22/06/24, option C has been removed, and replaced with "From Onedrive, create an embed link...". Very probably this means C was NEVER correct.

I also note that the link audlindr cites makes no mention of anything other than uploading (neither does the official AI-102 MS course). Given there is no requirement to "minimise time/effort" or automate, I don't see how hacking and pasting URLs is necessary in this scenario when downloading then uploading will DEFINITELY work. Everything else is a punt.

FWIW I chose option B and got 90% for the "image and video" component (i.e. got one image/video question wrong, and I'm 99% certain it wasn't this one). Make of that what you will.

13. AI-102 Topic 2 Question 3

Sequence
53
Discussion ID
60330
Source URL
https://www.examtopics.com/discussions/microsoft/view/60330-exam-ai-102-topic-2-question-3-discussion/
Posted By
SuperPetey
Posted At
Aug. 23, 2021, 8:13 a.m.

Question

HOTSPOT -
You have a Computer Vision resource named contoso1 that is hosted in the West US Azure region.
You need to use contoso1 to make a different size of a product photo by using the smart cropping feature.
How should you complete the API URL? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description Click to expand


Comments 25 comments Click to expand

Comment 1

ID: 430905 User: czmiel24 Badges: Highly Voted Relative Date: 4 years, 6 months ago Absolute Date: Tue 24 Aug 2021 18:33 Selected Answer: - Upvotes: 65

The second one should be generate Thumbnail imho.

Comment 1.1

ID: 438316 User: ziizai Badges: - Relative Date: 4 years, 6 months ago Absolute Date: Fri 03 Sep 2021 07:44 Selected Answer: - Upvotes: 15

yes, the question is exactly the sample here
https://docs.microsoft.com/en-us/rest/api/computervision/3.1/generate-thumbnail/generate-thumbnail#examples

Comment 2

ID: 468165 User: VulcanMXNY Badges: Highly Voted Relative Date: 4 years, 4 months ago Absolute Date: Tue 26 Oct 2021 18:38 Selected Answer: - Upvotes: 49

Both answers are incorrect.

The correct answers are:
https://contoso1.cognitiveservices.azure.com/
AND
generateThumbnail

westus.dev.cognitive.microsoft.com wouldn't be a correct Computer Vision endpoint if the resource name is contoso1.

Also, per the documentation, areaOfInterest "returns a bounding box around the most important area of the image", it doesn't return a different size photo (https://docs.microsoft.com/en-us/rest/api/computervision/3.1/get-area-of-interest).
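Putting the two boxes together, the sketch below assembles the full smart-cropping request URL for a resource named contoso1. The v3.2 API version and the 100x100 dimensions are illustrative assumptions; the exam answer only fixes the host and the generateThumbnail operation.

```python
# Sketch: building the smart-cropping (generateThumbnail) URL for the
# custom-subdomain endpoint of a Computer Vision resource.
def thumbnail_url(resource: str, width: int, height: int) -> str:
    return (f"https://{resource}.cognitiveservices.azure.com"
            f"/vision/v3.2/generateThumbnail"
            f"?width={width}&height={height}&smartCropping=true")

print(thumbnail_url("contoso1", 100, 100))
```

The request is then POSTed with an Ocp-Apim-Subscription-Key header and a JSON body containing the source image URL, as the curl sample further down in this thread shows.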

Comment 2.1

ID: 805762 User: AzureJobsTillRetire Badges: - Relative Date: 3 years ago Absolute Date: Sun 12 Feb 2023 00:56 Selected Answer: - Upvotes: 1

I agree with both answers here. The URL https://westus.api.cognitive.microsoft.com is just an example; in practice it needs to be changed to use the actual resource, which is contoso1.

Comment 2.1.1

ID: 959313 User: dazdzadzadzaazd Badges: - Relative Date: 2 years, 7 months ago Absolute Date: Sat 22 Jul 2023 08:59 Selected Answer: - Upvotes: 11

Today (july 2023), both regional and resource endpoints are supported. So both are correct :
https://contoso1.cognitiveservices.azure.com/
AND
https://westus.api.cognitive.microsoft.com

Doc : https://learn.microsoft.com/en-us/azure/ai-services/cognitive-services-custom-subdomains#what-if-an-sdk-asks-me-for-the-region-for-a-resource
"Regional endpoints and custom subdomain names are both supported and can be used interchangeably."

I tested it by creating a custom vision resource and used it with both endpoints.

Comment 2.1.1.1

ID: 1179755 User: Ody Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 02:17 Selected Answer: - Upvotes: 1

Yes, I think the key thing is that the key is specified.

Comment 2.2

ID: 862962 User: MDawson Badges: - Relative Date: 2 years, 11 months ago Absolute Date: Thu 06 Apr 2023 14:42 Selected Answer: - Upvotes: 3

contoso1 is a Computer Vision resource, so you would not specify /vision in the URL. Therefore I think the correct answer must be westus.api.cognitive.microsoft.com

Comment 2.3

ID: 631290 User: ppo12 Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Thu 14 Jul 2022 12:15 Selected Answer: - Upvotes: 10

I agree with generateThumbnail, however first answer provided by ET should be correct https://westus.api.cognitive.microsoft.com as shown in https://docs.microsoft.com/en-us/rest/api/computervision/3.1/generate-thumbnail/generate-thumbnail?tabs=HTTP#examples

Comment 3

ID: 1628390 User: harshad883 Badges: Most Recent Relative Date: 3 months, 2 weeks ago Absolute Date: Tue 25 Nov 2025 20:28 Selected Answer: - Upvotes: 1

https://westus.api.cognitive.microsoft.com/vision/v3.1/generateThumbnail?width={width}&height={height}&smartCropping=true

Comment 4

ID: 1411931 User: ouhshuo Badges: - Relative Date: 11 months, 2 weeks ago Absolute Date: Sun 30 Mar 2025 06:07 Selected Answer: - Upvotes: 1

For this one, there is an update since 1st of July 2019: New resources will use custom subdomain names. therefore the endpoint will be contoso1.xxx

Comment 5

ID: 1400194 User: Jaspal Badges: - Relative Date: 11 months, 4 weeks ago Absolute Date: Tue 18 Mar 2025 16:40 Selected Answer: - Upvotes: 1

The correct answers are:
https://contoso1.cognitiveservices.azure.com/
AND
generateThumbnail

Comment 6

ID: 1235337 User: SAMBIT Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 12:52 Selected Answer: - Upvotes: 3

POST https://westus.api.cognitive.microsoft.com/vision/v3.2/generateThumbnail?width=500&height=500&smartCropping=True


{
"url": "{url}"
}

Comment 7

ID: 1220251 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:51 Selected Answer: - Upvotes: 3

1. contoso1
2. generateThumbnail

Comment 8

ID: 1184513 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 28 Mar 2024 03:40 Selected Answer: - Upvotes: 4

Final Answer:
1. https://contoso1.cognitiveservices.azure.com/
2. generateThumbnail

Comment 9

ID: 1175057 User: Murtuza Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Sat 16 Mar 2024 16:36 Selected Answer: - Upvotes: 2

API URL:
The base URL for the Analyze Image 4.0 API is typically:
https://<region>.api.cognitive.microsoft.com/vision/v4.0/analyze

Replace <region> with the appropriate Azure region (in this case, West US).

Comment 10

ID: 936149 User: Pixelmate Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 07:19 Selected Answer: - Upvotes: 4

was on exam 28/06/2023

Comment 11

ID: 914836 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Sun 04 Jun 2023 19:21 Selected Answer: - Upvotes: 3

I simulated this in Azure Portal:
1. endpoint is https://contoso1.cognitiveservices.azure.com/
2. thumbnail

Comment 12

ID: 865932 User: Sachz88 Badges: - Relative Date: 2 years, 11 months ago Absolute Date: Mon 10 Apr 2023 03:49 Selected Answer: - Upvotes: 5

https://contoso1.cognitiveservices.azure.com/ is correct.

Context from ChatGPT:
westus.api.cognitive.microsoft.com is also a valid endpoint for the Cognitive Services APIs, including the Computer Vision API. However, it is important to note that this endpoint is deprecated and will be retired on October 31, 2024.

Therefore, it is recommended to use the newer endpoint format https://<resource-name>.cognitiveservices.azure.com/ for any new development work. This endpoint format follows a more standard Azure resource URL pattern and is also more flexible in terms of geographic distribution and availability.

Hope it helps.

Comment 13

ID: 819537 User: NNU Badges: - Relative Date: 3 years ago Absolute Date: Thu 23 Feb 2023 19:20 Selected Answer: - Upvotes: 1

The first is https://contoso1.cognitiveservices.azure.com the second is generateThumbnail
POST https://*.cognitiveservices.azure.com/vision/v3.2/generateThumbnail?width=100&height=100&smartCropping=true&model-version=latest HTTP/1.1
Host: *.cognitiveservices.azure.com
Content-Type: application/json

{"url":"http://example.com/images/test.jpg"}

Comment 14

ID: 778439 User: KingChuang Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Tue 17 Jan 2023 02:15 Selected Answer: - Upvotes: 3

on my exam (2023-01-16 Passed)

My Answer:
https://westus.api.cognitive.microsoft.com
But I think this is wrong, because the question requires using contoso1!
So the correct answer is:
https://contoso1.cognitiveservices.azure.com/

Comment 14.1

ID: 779223 User: ap1234pa Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Tue 17 Jan 2023 19:47 Selected Answer: - Upvotes: 1

Hello.. I have exam tomorrow. Can you suggest if ET questions were on exam?

Comment 15

ID: 636584 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 25 Jul 2022 11:00 Selected Answer: - Upvotes: 4

Westus
generateThumbnail

https://docs.microsoft.com/en-gb/azure/cognitive-services/computer-vision/how-to/generate-thumbnail#call-the-generate-thumbnail-api

curl -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -o <thumbnailFile> -H "Content-Type: application/json" "https://westus.api.cognitive.microsoft.com/vision/v3.2/generateThumbnail?width=100&height=100&smartCropping=true" -d "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Shorkie_Poo_Puppy.jpg/1280px-Shorkie_Poo_Puppy.jpg\"}"

Comment 16

ID: 636047 User: RamonKaus Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Sun 24 Jul 2022 14:34 Selected Answer: - Upvotes: 1

First one is contoso1.cognitiveservices. Just checked my own script, and Cognitive Services uses your resource name in the endpoint URI.

Comment 16.1

ID: 636048 User: RamonKaus Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Sun 24 Jul 2022 14:34 Selected Answer: - Upvotes: 1

Second one is obv. generateThumbnail

Comment 17

ID: 633371 User: JDarshan Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Tue 19 Jul 2022 07:29 Selected Answer: - Upvotes: 1

For the first dropdown, the 3rd option works with both a Cognitive Services key and a Computer Vision key, whereas the 2nd option works only with a Computer Vision key, so the 3rd option works in both situations. Therefore I'll go with https://westus.api.cognitive.microsoft.com/vision/v3.1/generateThumbnail?width=500&height=500&smartCropping=True

14. AI-102 Topic 1 Question 5

Sequence
56
Discussion ID
54748
Source URL
https://www.examtopics.com/discussions/microsoft/view/54748-exam-ai-102-topic-1-question-5-discussion/
Posted By
LKLK10
Posted At
June 6, 2021, 11:12 p.m.

Question

HOTSPOT -
You need to create a new resource that will be used to perform sentiment analysis and optical character recognition (OCR). The solution must meet the following requirements:
✑ Use a single key and endpoint to access multiple services.
✑ Consolidate billing for future services that you might use.
✑ Support the use of Computer Vision in the future.
How should you complete the HTTP request to create the new resource? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description Click to expand

Box 1: PUT -
Sample Request: PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/
Microsoft.CognitiveServices/accounts/contoso?api-version=2021-04-30
Incorrect Answers:
PATCH is for updates.

Box 2: CognitiveServices -
Microsoft Azure Cognitive Services provides pre-trained models for a wide range of machine learning problems.
The service categories are:
✑ Decision
✑ Language (includes sentiment analysis)
✑ Speech
✑ Vision (includes OCR)
✑ Web Search
Reference:
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/accountmanagement/accounts/create
https://www.analyticsvidhya.com/blog/2020/12/microsoft-azure-cognitive-services-api-for-ai-development/
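The sketch below assembles the ARM PUT request for a multi-service Cognitive Services account, which satisfies all three requirements (single key/endpoint, consolidated billing, future Computer Vision use). The subscription ID, resource-group name, account name, SKU, and location are placeholder assumptions.

```python
# Sketch: the ARM create-account call. A multi-service account uses
# kind "CognitiveServices"; a single-service resource would instead use
# a kind such as "ComputerVision" or "TextAnalytics".
import json

def build_create_request(sub_id: str, rg: str, name: str):
    url = (f"https://management.azure.com/subscriptions/{sub_id}"
           f"/resourceGroups/{rg}/providers/Microsoft.CognitiveServices"
           f"/accounts/{name}?api-version=2021-04-30")
    body = json.dumps({
        "kind": "CognitiveServices",  # multi-service account
        "sku": {"name": "S0"},
        "location": "westus",
    })
    return "PUT", url, body

method, url, body = build_create_request(
    "00000000-0000-0000-0000-000000000000", "test-rg", "multi1")
```

PUT (rather than POST) fits here because the caller names the resource in the URL and the operation is idempotent: repeating it yields the same account.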

Comments 22 comments Click to expand

Comment 1

ID: 380572 User: WillyMac Badges: Highly Voted Relative Date: 4 years, 9 months ago Absolute Date: Sat 12 Jun 2021 17:57 Selected Answer: - Upvotes: 28

I think answer is correct.
PUT: puts a file or resource at a specific URI, and exactly at that URI.
If there's already a file or resource at that URI, PUT replaces that file or resource.
If there is no file or resource there, PUT creates one.
POST: POST sends data to a specific URI and expects the resource at that URI to handle the request.

Comment 1.1

ID: 384300 User: jeffangel28 Badges: - Relative Date: 4 years, 8 months ago Absolute Date: Thu 17 Jun 2021 17:49 Selected Answer: - Upvotes: 1

It's seems correct, the link shows a similar example
https://docs.microsoft.com/en-us/rest/api/deviceupdate/resourcemanager/accounts/create

Comment 1.1.1

ID: 415231 User: YipingRuan Badges: - Relative Date: 4 years, 7 months ago Absolute Date: Tue 27 Jul 2021 08:50 Selected Answer: - Upvotes: 3

Yes, PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}?api-version=2021-04-30

Comment 1.2

ID: 1276751 User: Ody Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Mon 02 Sep 2024 15:40 Selected Answer: - Upvotes: 1

Resource identification: PUT operations can only operate on the resource identified by the URL provided. POST operations can be performed on any server-side resource, regardless of the URL.

Use cases: PUT is suitable for scenarios where you have full control over resource replacement. POST is suitable for scenarios where you need to submit data for processing, or create new resources without specifying a URI.

Comment 2

ID: 396366 User: Adedoyin_Simeon Badges: Highly Voted Relative Date: 4 years, 8 months ago Absolute Date: Thu 01 Jul 2021 23:23 Selected Answer: - Upvotes: 15

Although in web programming and API development PUT is the HTTP(S) request method for an update operation, it can also create a resource when there is no resource to update. I don't know precisely why, but the method Azure actually uses in a REST API request to create a resource is "PUT". So, the answers are correct.
See Ref:
https://docs.microsoft.com/en-us/rest/api/resources/resources/create-or-update

Comment 3

ID: 1627800 User: nkcodescribbler Badges: Most Recent Relative Date: 3 months, 2 weeks ago Absolute Date: Sun 23 Nov 2025 02:48 Selected Answer: - Upvotes: 2

1. PUT
2. CognitiveServices

Comment 4

ID: 1235713 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 23 Jun 2024 09:15 Selected Answer: - Upvotes: 2

The answer is
PUT
congnitiveservices
Why PUT? While POST can also create new resources, it is not idempotent like PUT, so it is better to use PUT when creating services.

Comment 5

ID: 1219582 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 27 May 2024 15:24 Selected Answer: - Upvotes: 2

PUT and CognitiveServices

Comment 6

ID: 1213813 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 19 May 2024 15:37 Selected Answer: - Upvotes: 1

Cognitive Services are "AI parts" that mimic human cognition (Cognitive) and are immediately available as WebAPI. Embedded applications can build cognitive solutions for vision, speech, language, decision making, and search without the need for technical knowledge in AI or data science.

Comment 7

ID: 1200560 User: duyle2906 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Tue 23 Apr 2024 10:41 Selected Answer: - Upvotes: 1

When i differed with ChatGPT about PUT being correct, this is what I got

When creating a new resource in Azure, the PUT method is used to update or create the resource with the provided configuration. In the case of Azure Cognitive Services, you typically use the PUT method to provision a new instance of the service with the specified settings.

Therefore, the correct HTTP request to create the new resource should use the PUT method, not POST.

Comment 8

ID: 1178842 User: pwang009 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 21 Mar 2024 00:18 Selected Answer: - Upvotes: 1

Answer from ChatGPT, PUT and cognitiveservices
curl -X PUT -H "Authorization: Bearer {access_token}" -H "Content-Type: application/json" \
-d '{"kind":"TextAnalytics","sku":{"name":"F0"}}' \
"https://management.azure.com/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.CognitiveServices/accounts/{api_name}?api-version=2021-04-30"

Comment 9

ID: 1171965 User: Murtuza Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Tue 12 Mar 2024 19:54 Selected Answer: - Upvotes: 1

The given answers are CORRECT

Comment 10

ID: 1016311 User: sara_aras Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Mon 25 Sep 2023 04:49 Selected Answer: - Upvotes: 2

PUT is correct.
https://learn.microsoft.com/en-us/rest/api/resources/resources/create-or-update

Comment 11

ID: 987435 User: f2mrmlwr Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Tue 22 Aug 2023 14:31 Selected Answer: - Upvotes: 2

Yes, PUT
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/accountmanagement/accounts/create?tabs=HTTP

Comment 11.1

ID: 1234413 User: exnaniantwort Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 14:42 Selected Answer: - Upvotes: 1

This is the correct documentation link for creating cognitive services account. I don't know why all the other comments just quote those to create other resources.

Comment 12

ID: 947633 User: RegTemp Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Mon 10 Jul 2023 02:47 Selected Answer: - Upvotes: 1

1. PUT
2. CognitiveServices

Comment 13

ID: 627551 User: Eltooth Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Tue 05 Jul 2022 19:03 Selected Answer: - Upvotes: 3

PUT
CognitiveServices

Comment 14

ID: 516043 User: sumanshu Badges: - Relative Date: 4 years, 2 months ago Absolute Date: Mon 03 Jan 2022 21:49 Selected Answer: - Upvotes: 2

PATCH is eliminated (it is only used for updates). I think we can use both POST and PUT to create resources, but it is better to use PUT (in case the API call is re-triggered, it will not fail).

And the 2nd answer is "CognitiveServices", which provides many models (so we can use Computer Vision as well). If we select only Computer Vision, then we can't use sentiment analysis and OCR (for which we are trying to create the resource).

Comment 15

ID: 410028 User: LPreethi Badges: - Relative Date: 4 years, 7 months ago Absolute Date: Tue 20 Jul 2021 07:25 Selected Answer: - Upvotes: 4

May be the reason to use PUT is because , PUT requests must be idempotent. If a client submits the same PUT request multiple times, the results should always be the same (the same resource will be modified with the same values). POST and PATCH requests are not guaranteed to be idempotent.

Comment 16

ID: 376330 User: LKLK10 Badges: - Relative Date: 4 years, 9 months ago Absolute Date: Sun 06 Jun 2021 23:12 Selected Answer: - Upvotes: 4

Shouldn’t the first one be POST? It says it’s a new resource created, not an existing one updated.

Comment 16.1

ID: 516039 User: sumanshu Badges: - Relative Date: 4 years, 2 months ago Absolute Date: Mon 03 Jan 2022 21:45 Selected Answer: - Upvotes: 1

PUT can also be used to CREATE resource

Comment 16.2

ID: 591682 User: Jojo_star Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Mon 25 Apr 2022 14:22 Selected Answer: - Upvotes: 1

I agree, POST is the answer if we want to create a new resource; below is a link for another service, but still under Cognitive Services. https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription

15. AI-102 Topic 4 Question 31

Sequence
58
Discussion ID
149767
Source URL
https://www.examtopics.com/discussions/microsoft/view/149767-exam-ai-102-topic-4-question-31-discussion/
Posted By
gadez1234
Posted At
Oct. 18, 2024, 11:27 p.m.

Question

DRAG DROP
-

You have an Azure subscription that contains a storage account named sa1 and an Azure AI Document Intelligence resource named DI1.

You need to create and train a custom model in DI1 by using Document Intelligence Studio. The solution must minimize development effort.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

image

Suggested Answer

image
Answer Description Click to expand


Comments 7 comments Click to expand

Comment 1

ID: 1299808 User: gadez1234 Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Fri 18 Oct 2024 23:27 Selected Answer: - Upvotes: 16

I think the answer is as follows.

Create a custom model project and link the project to sa1
Upload five sample documents.
Apply labels to the sample documents.
Train and test the model.
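Behind the Studio UI, the train step in this sequence corresponds to a build-model REST call. The sketch below assembles that request; the endpoint, model ID, container SAS URL, build mode, and api-version are illustrative assumptions (Studio generates these for you, which is why it minimizes development effort).

```python
# Sketch: the "build custom model" request Studio issues after the
# labeled sample documents are in the linked storage account (sa1).
import json

def build_model_request(endpoint: str, model_id: str, container_sas_url: str):
    url = (f"{endpoint}/formrecognizer/documentModels:build"
           f"?api-version=2022-08-31")
    body = json.dumps({
        "modelId": model_id,
        "buildMode": "template",  # template: fixed-layout forms
        "azureBlobSource": {"containerUrl": container_sas_url},
    })
    return url, body

url, body = build_model_request(
    "https://di1.cognitiveservices.azure.com",
    "my-model",
    "https://sa1.blob.core.windows.net/training?<sas>")
```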

Comment 1.1

ID: 1299967 User: cheetah313 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Sat 19 Oct 2024 11:31 Selected Answer: - Upvotes: 3

Since the upload of the documents is done to the storage account, which is already present before we create the custom model project, I think the order in the original answer given is correct.

Comment 1.1.1

ID: 1322781 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 15:16 Selected Answer: - Upvotes: 5

What you think isn't relevant compared to the documentation. Read the documentation and you'll find that Gadez is correct.

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio?view=doc-intel-4.0.0

Comment 2

ID: 1410237 User: sooss Badges: Most Recent Relative Date: 11 months, 3 weeks ago Absolute Date: Wed 26 Mar 2025 04:03 Selected Answer: - Upvotes: 1

Given answer seems correct since documents would be uploaded in SA already. https://learn.microsoft.com/en-us/training/modules/work-form-recognizer/9-form-recognizer-studio

Comment 3

ID: 1304297 User: Christian_garcia_martin Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 06:12 Selected Answer: - Upvotes: 4

If you did not create the custom model project, where would you upload the images? So for me:
Create a custom model project and link the project to sa1
Upload five sample documents.
Apply labels to the sample documents.
Train and test the model.

Comment 3.1

ID: 1605677 User: cesaraldo Badges: - Relative Date: 6 months, 1 week ago Absolute Date: Tue 02 Sep 2025 21:00 Selected Answer: - Upvotes: 1

To the storage account, which has already been provisioned. The order of the first two actions is not relevant. However, better to be on the safe side than to have to challenge the answer...

Comment 3.1.1

ID: 1624043 User: CMal Badges: - Relative Date: 4 months ago Absolute Date: Fri 07 Nov 2025 22:38 Selected Answer: - Upvotes: 1

Christian is correct in the order. The project needs to be created first because it defines the workspace and configuration that tells Document Intelligence Studio where to look for your documents, how to organize them, and how to manage the labeling and training process.
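
For context, the final "Train and test the model" step corresponds to a single model-build call against DI1. Below is a sketch of assembling that request body; the field names follow my reading of the v3.1 `documentModels:build` REST operation and should be verified against the documentation, and the model ID and container URL are placeholders.

```python
# Hypothetical sketch: assemble the body for a Document Intelligence
# "build model" request. Field names are assumptions from the v3.1 REST API;
# the labeled sample documents (plus the label files produced by the Studio)
# must already be in the sa1 blob container.
def build_request_body(model_id: str, container_url: str) -> dict:
    return {
        "modelId": model_id,              # name of the custom model to create
        "buildMode": "template",          # "template" suits fixed-layout forms
        "azureBlobSource": {
            "containerUrl": container_url,  # sa1 container holding docs + labels
        },
    }

body = build_request_body("expense-forms-v1",
                          "https://sa1.blob.core.windows.net/training-docs?<sas-token>")
print(sorted(body))  # ['azureBlobSource', 'buildMode', 'modelId']
```

This is why the project must be linked to sa1 before training: the build operation reads everything it needs from the linked container.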

16. AI-102 Topic 1 Question 45

Sequence
59
Discussion ID
106686
Source URL
https://www.examtopics.com/discussions/microsoft/view/106686-exam-ai-102-topic-1-question-45-discussion/
Posted By
Mike19D
Posted At
April 19, 2023, 10:09 a.m.

Question

You have an Azure IoT hub that receives sensor data from machinery.

You need to build an app that will perform the following actions:

• Perform anomaly detection across multiple correlated sensors.
• Identify the root cause of process stops.
• Send incident alerts.

The solution must minimize development time.

Which Azure service should you use?

  • A. Azure Metrics Advisor
  • B. Form Recognizer
  • C. Azure Machine Learning
  • D. Anomaly Detector

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 20 comments Click to expand

Comment 1

ID: 936127 User: Pixelmate Badges: Highly Voted Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 07:13 Selected Answer: - Upvotes: 12

This came in the exam 28/06/2023

Comment 2

ID: 1124317 User: josebernabeo Badges: Highly Voted Relative Date: 2 years, 1 month ago Absolute Date: Tue 16 Jan 2024 16:21 Selected Answer: - Upvotes: 6

"Starting on the 20th of September, 2023 you won’t be able to create new Metrics Advisor resources. The Metrics Advisor service is being retired on the 1st of October, 2026."

source: https://learn.microsoft.com/en-us/azure/ai-services/metrics-advisor/overview

Comment 3

ID: 1613788 User: b6efbbc Badges: Most Recent Relative Date: 5 months, 1 week ago Absolute Date: Tue 30 Sep 2025 14:45 Selected Answer: A Upvotes: 1

Azure Anomaly Detector (D): While it supports multivariate detection across up to 300 signals, it focuses primarily on detection rather than providing built-in root-cause analysis and alerting capabilities, requiring additional development work.

Comment 4

ID: 1604748 User: cesaraldo Badges: - Relative Date: 6 months, 1 week ago Absolute Date: Sun 31 Aug 2025 06:46 Selected Answer: A Upvotes: 1

https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/detecting-methane-leaks-using-azure-metrics-advisor/3254005

Comment 5

ID: 1563737 User: marcellov Badges: - Relative Date: 10 months, 2 weeks ago Absolute Date: Fri 25 Apr 2025 18:38 Selected Answer: D Upvotes: 3

Azure AI Anomaly Detector is the optimal choice for this scenario due to its prebuilt models for multi-sensor correlation analysis and low-code implementation.
Why Not Other Options?
A. Metrics Advisor: Focuses on business metrics (e.g., revenue, web traffic), not IoT sensor data, and is being deprecated.
B. Form Recognizer: Extracts text/data from documents, irrelevant for sensor analysis.
C. Azure Machine Learning: Requires custom model development, increasing time/resources compared to Anomaly Detector’s prebuilt APIs.

Comment 5.1

ID: 1574547 User: MinhHy Badges: - Relative Date: 9 months, 1 week ago Absolute Date: Tue 03 Jun 2025 18:32 Selected Answer: - Upvotes: 1

Starting on the 20th of September, 2023 you won’t be able to create new Metrics Advisor resources. The Metrics Advisor service is being retired on the 1st of October, 2026.

Comment 5.2

ID: 1604747 User: cesaraldo Badges: - Relative Date: 6 months, 1 week ago Absolute Date: Sun 31 Aug 2025 06:46 Selected Answer: - Upvotes: 2

Do not agree. Azure Metrics Advisor is (was) the ideal option: https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/detecting-methane-leaks-using-azure-metrics-advisor/3254005

Comment 6

ID: 1552873 User: gil906 Badges: - Relative Date: 11 months, 1 week ago Absolute Date: Sun 06 Apr 2025 01:35 Selected Answer: D Upvotes: 4

Azure Metrics Advisor was retired on September 20, 2023 and replaced by Azure AI Anomaly Detector. If the exam is updated, the answer is D.

Comment 6.1

ID: 1604739 User: cesaraldo Badges: - Relative Date: 6 months, 1 week ago Absolute Date: Sun 31 Aug 2025 06:45 Selected Answer: - Upvotes: 1

That's incorrect. Azure AI Metrics Advisor is built on top of Azure AI Anomaly Detector. BOTH services will be retired in October 2026.

Comment 7

ID: 1387378 User: tejas29 Badges: - Relative Date: 1 year ago Absolute Date: Tue 11 Mar 2025 13:02 Selected Answer: A Upvotes: 1

To build an app that performs anomaly detection across multiple correlated sensors, identifies the root cause of process stops, and sends incident alerts while minimizing development time, you should use Azure Metrics Advisor.

Azure Metrics Advisor is designed for monitoring and detecting anomalies in time-series data, making it suitable for handling multiple correlated sensors. It also provides root cause analysis and incident alerting capabilities

Comment 8

ID: 1313303 User: 9cc71b6 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 17 Nov 2024 00:31 Selected Answer: D Upvotes: 6

Should be A, but it is retired. D

Comment 9

ID: 1297828 User: Sujeeth Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 14 Oct 2024 23:56 Selected Answer: - Upvotes: 1

A. Azure Metrics Advisor. It’s designed for anomaly detection across multiple data streams, can identify root causes, and send incident alerts—all while minimizing development time

Comment 10

ID: 1291857 User: AnnaR Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 01 Oct 2024 09:43 Selected Answer: - Upvotes: 1

Should be A, But starting on the 20th of September, 2023 you won’t be able to create new Metrics Advisor resources. The Metrics Advisor service is being retired on the 1st of October, 2026.
Source: https://learn.microsoft.com/en-us/azure/ai-services/metrics-advisor/overview

Comment 11

ID: 1284994 User: AzureGeek79 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 03:08 Selected Answer: - Upvotes: 1

To build an app that performs anomaly detection across multiple correlated sensors, identifies the root cause of process stops, and sends incident alerts while minimizing development time, you should use:

D. Anomaly Detector

The Anomaly Detector service is designed to handle multivariate anomaly detection, which means it can analyze multiple correlated signals and identify anomalies. It also helps in pinpointing the root cause of anomalies and can be integrated with alerting mechanisms to send incident alerts⁵⁶. This service is well-suited for your requirements and minimizes development effort by providing pre-built models and easy integration.

Comment 12

ID: 1276832 User: Ody Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Mon 02 Sep 2024 17:03 Selected Answer: - Upvotes: 1

Azure AI Metrics Advisor

https://learn.microsoft.com/en-us/azure/ai-services/metrics-advisor/overview

Comment 13

ID: 1250049 User: SAMBIT Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Thu 18 Jul 2024 02:58 Selected Answer: - Upvotes: 1

Metrics Advisor is for streaming data, whereas Anomaly Detector is for batch data.

Comment 14

ID: 1235158 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:05 Selected Answer: A Upvotes: 1

A is the answer.

Comment 15

ID: 1225257 User: Rubby Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 09:51 Selected Answer: - Upvotes: 1

D is Correct for anomaly detection

Comment 16

ID: 1217611 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:47 Selected Answer: A Upvotes: 2

A is right. From Takedajuku perspective, if you study for 4 days and spend 2 days reviewing, you will have a better chance of passing the exam.

Comment 17

ID: 1213791 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 19 May 2024 15:08 Selected Answer: - Upvotes: 3

Why is it A?
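
Service choice aside, "anomaly detection across multiple correlated sensors" means flagging points where the relationship between sensors breaks, not just where a single reading is extreme. A toy sketch of that idea (pure Python, entirely illustrative; this is not the Azure API, and the fixed 2:1 ratio is an assumed model):

```python
from statistics import mean, stdev

# Two correlated sensors: pressure normally tracks ~2x temperature.
temperature = [20, 21, 22, 21, 20, 22, 21, 20, 21, 22]
pressure    = [40, 42, 44, 42, 40, 44, 42, 44, 42, 44]
# Index 7: pressure 44 is a perfectly normal value on its own,
# but it is inconsistent with a temperature reading of 20.

# Model the relationship as a fixed ratio and flag residual outliers.
residuals = [p - 2 * t for t, p in zip(temperature, pressure)]
mu, sigma = mean(residuals), stdev(residuals)
anomalies = [i for i, r in enumerate(residuals) if abs(r - mu) > 2 * sigma]
print(anomalies)  # [7]
```

A univariate detector looking at each sensor alone would miss index 7, which is what the "multiple correlated sensors" requirement is really about.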

17. AI-102 Topic 2 Question 5

Sequence
68
Discussion ID
60335
Source URL
https://www.examtopics.com/discussions/microsoft/view/60335-exam-ai-102-topic-2-question-5-discussion/
Posted By
SuperPetey
Posted At
Aug. 23, 2021, 8:25 a.m.

Question

DRAG DROP -
You train a Custom Vision model to identify a company's products by using the Retail domain.
You plan to deploy the model as part of an app for Android phones.
You need to prepare the model for deployment.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand


Comments 24 comments Click to expand

Comment 1

ID: 430913 User: czmiel24 Badges: Highly Voted Relative Date: 4 years, 6 months ago Absolute Date: Tue 24 Aug 2021 18:42 Selected Answer: - Upvotes: 44

Actually the model should be retrained prior to publishing:

"From the top of the page, select Train to retrain using the new domain."

So it should be:
1. Change the model domain
2. Retrain
3. Publish

Comment 1.1

ID: 605473 User: DingDongSingSong Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Sun 22 May 2022 14:28 Selected Answer: - Upvotes: 2

Where is the test step before publishing? After retraining, you must test the model before publishing it.

Comment 1.2

ID: 442242 User: dinesh_tng Badges: - Relative Date: 4 years, 6 months ago Absolute Date: Fri 10 Sep 2021 03:44 Selected Answer: - Upvotes: 5

Yep, Change the model to Retail (Compact). Exporting the Model is an optional step.

Comment 1.3

ID: 645921 User: ninjia Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Fri 12 Aug 2022 15:55 Selected Answer: - Upvotes: 5

Agreed. see reference https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-your-model

Comment 1.4

ID: 442243 User: dinesh_tng Badges: - Relative Date: 4 years, 6 months ago Absolute Date: Fri 10 Sep 2021 03:52 Selected Answer: - Upvotes: 15

Actually, all four steps are required, in the sequence Change, Retrain, Test, and Export. Export is also a must, as the model has to be deployed in an Android app. If I have to choose three options, I would drop "Test", as it is not mandatory to proceed, though it is good to have as part of the process.

Comment 1.4.1

ID: 1165347 User: Mehe323 Badges: - Relative Date: 2 years ago Absolute Date: Mon 04 Mar 2024 07:25 Selected Answer: - Upvotes: 1

The question states: 'Which three actions should you perform in sequence?'

Comment 2

ID: 561312 User: reachmymind Badges: Highly Voted Relative Date: 4 years ago Absolute Date: Sat 05 Mar 2022 09:53 Selected Answer: - Upvotes: 18

Change the model domain {Retail(compact)}
Retrain the model
Export the model

https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-your-model

Comment 3

ID: 1598780 User: JayatiRC Badges: Most Recent Relative Date: 6 months, 3 weeks ago Absolute Date: Sun 17 Aug 2025 12:03 Selected Answer: - Upvotes: 1

Why is the testing part not required?

Comment 4

ID: 1578301 User: StelSen Badges: - Relative Date: 8 months, 4 weeks ago Absolute Date: Tue 17 Jun 2025 14:09 Selected Answer: - Upvotes: 1

1. Change the model domain
2. Retrain the model
3. Export the model

Testing the model is important for quality, but not strictly required for deployment preparation

Comment 5

ID: 1295785 User: AzurePart Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Fri 11 Oct 2024 02:08 Selected Answer: - Upvotes: 3

There is no mention of whether or not they will actually deploy it, although they plan to deploy it.
If they only plan to deploy it, isn't the current answer correct?

1. Change the model domain.
2. Retrain the model.
3. Test the model.

Comment 6

ID: 1252310 User: nithin_reddy Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Sun 21 Jul 2024 10:07 Selected Answer: - Upvotes: 2

Change the model domain Retail(compact)
Retrain the model
Export the model

Comment 7

ID: 1240861 User: anto69 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 02 Jul 2024 16:51 Selected Answer: - Upvotes: 1

But why do they ask for three when the answer is four?
Btw...
1. Change the model domain
2. Retrain
3. Publish

Comment 7.1

ID: 1263007 User: djsoyboi Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Fri 09 Aug 2024 15:12 Selected Answer: - Upvotes: 1

That's my exact question too.

Comment 8

ID: 1226003 User: ARM360 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 07 Jun 2024 11:22 Selected Answer: - Upvotes: 1

You train a Custom Vision model to identify a company's products by using the Retail domain.

You are already using the retail domain, so you technically don't need to change the domain. You are only preparing it for deployment on a mobile app.

So ...

To prepare the Custom Vision model for deployment as part of an app for Android phones, you should perform the following three actions in sequence:

Retrain the model. - Ensure the model is trained on the latest data to improve accuracy.
Test the model. - Validate the model's performance to ensure it meets the required standards.
Export the model. - Export the trained and tested model for integration into the Android app

Comment 8.1

ID: 1291630 User: yibes Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Mon 30 Sep 2024 19:22 Selected Answer: - Upvotes: 1

change model to Retail(compact) for mobile apps

Comment 9

ID: 1220346 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:28 Selected Answer: - Upvotes: 3

Change the model domain
Retrain the model
Export the model

Comment 10

ID: 1220250 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:50 Selected Answer: - Upvotes: 2

1. Change the model domain
2. Retrain
3. Export

Comment 11

ID: 1218342 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 15:22 Selected Answer: - Upvotes: 1

1. Change the model domain
2. Retrain
3. Export

Comment 12

ID: 1213024 User: JamesKJoker Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 17 May 2024 21:20 Selected Answer: - Upvotes: 3

The Android app has internet connectivity, so there is no need for the compact version and no need (and not even the option) for an export. Therefore:
1. Change the Model
2. Retrain the Model
3. Test the Model.

Comment 13

ID: 1182401 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 11:43 Selected Answer: - Upvotes: 2

Final Answer:
Change the model domain {Retail(compact)}
Retrain the model
Export the model

Comment 14

ID: 936151 User: Pixelmate Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 07:20 Selected Answer: - Upvotes: 3

was on exam 28/06

Comment 15

ID: 839753 User: RAN_L Badges: - Relative Date: 2 years, 12 months ago Absolute Date: Wed 15 Mar 2023 10:53 Selected Answer: - Upvotes: 6

Change the model domain: Since you trained the model using the Retail domain, you need to switch the domain to one that is optimized for mobile devices such as the General (compact) domain.

Retrain the model: After changing the domain, you need to retrain the model using the new domain settings.

Export the model: Once the model is retrained, you can export it in the format that is compatible with your Android app. The model can be exported as a TensorFlow or Core ML model for deployment on Android.

Comment 16

ID: 806130 User: NNU Badges: - Relative Date: 3 years ago Absolute Date: Sun 12 Feb 2023 10:04 Selected Answer: - Upvotes: 1

The model was trained; we must test it, change the model domain, and export it.
https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/test-your-model — in the how-to guides, the first step is testing, the second is changing the domain (under "Export models"), and finally exporting the model.

Comment 17

ID: 778441 User: KingChuang Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Tue 17 Jan 2023 02:17 Selected Answer: - Upvotes: 5

on my exam (2023-01-16 Passed)

My Answer:
1. Change the model domain
2. Retrain
3. Publish

18. AI-102 Topic 2 Question 2

Sequence
71
Discussion ID
74739
Source URL
https://www.examtopics.com/discussions/microsoft/view/74739-exam-ai-102-topic-2-question-2-discussion/
Posted By
SamedKia
Posted At
April 28, 2022, 9:44 a.m.

Question

You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code.
image
During testing, you discover that the call to the GetReadResultAsync method occurs before the read operation is complete.
You need to prevent the GetReadResultAsync method from proceeding until the read operation is complete.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Remove the Guid.Parse(operationId) parameter.
  • B. Add code to verify the results.Status value.
  • C. Add code to verify the status of the txtHeaders.Status value.
  • D. Wrap the call to GetReadResultAsync within a loop that contains a delay.

Suggested Answer

BD

Answer Description Click to expand


Community Answer Votes

Comments 10 comments Click to expand

Comment 1

ID: 620166 User: sdokmak Badges: Highly Voted Relative Date: 3 years, 8 months ago Absolute Date: Wed 22 Jun 2022 07:03 Selected Answer: BD Upvotes: 13

as per link in solution

Comment 1.1

ID: 620167 User: sdokmak Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Wed 22 Jun 2022 07:09 Selected Answer: - Upvotes: 5

and looking at what the getReadAsync and getReadResultAsync methods return:
getReadResultAsync returns an Observable<ReadOperationResult> object, which contains a status() method.

Comment 1.1.1

ID: 620168 User: sdokmak Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Wed 22 Jun 2022 07:11 Selected Answer: - Upvotes: 4

getReadAsync doesn't have a status method. The answer is B and D.
https://docs.microsoft.com/en-us/dotnet/api/system.io.stream.readasync?view=net-6.0

Comment 2

ID: 936143 User: Pixelmate Badges: Highly Voted Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 07:17 Selected Answer: - Upvotes: 7

this appeared in the exam 28/06

Comment 3

ID: 1581525 User: azuri Badges: Most Recent Relative Date: 8 months, 2 weeks ago Absolute Date: Sun 29 Jun 2025 04:34 Selected Answer: BD Upvotes: 1

while (true)
{
    var results = await client.GetReadResultAsync(operationId);

    if (results.Status == OperationStatusCodes.Succeeded)
    {
        // Process the results
        break;
    }
    else if (results.Status == OperationStatusCodes.Failed)
    {
        // Handle failure
        break;
    }

    // Delay to avoid hammering the service with requests
    await Task.Delay(1000);
}

Comment 4

ID: 1321895 User: vp_2299 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 04 Dec 2024 14:23 Selected Answer: BD Upvotes: 1

as per link in solution

Comment 5

ID: 1218339 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 15:15 Selected Answer: BD Upvotes: 2

results.Status
GetReadResultAsync

Comment 6

ID: 612709 User: PHD_CHENG Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Tue 07 Jun 2022 13:27 Selected Answer: - Upvotes: 4

was on exam 7 Jun 2022

Comment 7

ID: 593647 User: SamedKia Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Thu 28 Apr 2022 09:44 Selected Answer: - Upvotes: 3

C and D are the correct answers.

Comment 7.1

ID: 628306 User: ppo12 Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Thu 07 Jul 2022 11:50 Selected Answer: - Upvotes: 1

I don't think C is one of the answers, based on https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ComputerVisionQuickstart.cs.

It seems results.Status is part of the while condition, hence I agree with sdokmak's B and D.
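
The accepted B + D pattern, verify the result's status inside a delayed loop, can be sketched language-agnostically like this (the client class, method names, and status strings here are stand-ins for illustration, not the actual Computer Vision SDK):

```python
import time

class FakeReadClient:
    """Stand-in for the OCR client: reports 'running' twice, then 'succeeded'."""
    def __init__(self):
        self.calls = 0

    def get_read_result(self, operation_id):
        self.calls += 1
        return {"status": "running" if self.calls < 3 else "succeeded"}

def wait_for_read_result(client, operation_id, delay=0.01, max_attempts=50):
    """D: loop with a delay; B: check the result's status on every iteration."""
    for _ in range(max_attempts):
        result = client.get_read_result(operation_id)
        if result["status"] in ("succeeded", "failed"):
            return result
        time.sleep(delay)  # back off instead of hammering the service
    raise TimeoutError("read operation did not finish in time")

result = wait_for_read_result(FakeReadClient(), "op-123")
print(result["status"])  # succeeded
```

The bounded attempt count and timeout are a practical addition; the exam answer only requires the status check (B) and the delayed loop (D).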

19. AI-102 Topic 10 Question 2

Sequence
72
Discussion ID
76783
Source URL
https://www.examtopics.com/discussions/microsoft/view/76783-exam-ai-102-topic-10-question-2-discussion/
Posted By
ANIKI51419
Posted At
June 14, 2022, 6:17 p.m.

Question

You are developing the knowledgebase.
You use Azure Video Analyzer for Media (previously Video indexer) to obtain transcripts of webinars.
You need to ensure that the solution meets the knowledgebase requirements.
What should you do?

  • A. Create a custom language model
  • B. Configure audio indexing for videos only
  • C. Enable multi-language detection for videos
  • D. Build a custom Person model for webinar presenters

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 14 comments Click to expand

Comment 1

ID: 984784 User: james2033 Badges: Highly Voted Relative Date: 2 years, 6 months ago Absolute Date: Fri 18 Aug 2023 23:18 Selected Answer: A Upvotes: 26

Keyword "jargon", so choose "A. Create a custom language model" .

Comment 2

ID: 1581539 User: bvs_reddy_4777 Badges: Most Recent Relative Date: 8 months, 2 weeks ago Absolute Date: Sun 29 Jun 2025 07:39 Selected Answer: A Upvotes: 1

A https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-language-model-overview

Comment 3

ID: 1267913 User: JuneRain Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 18 Aug 2024 04:56 Selected Answer: - Upvotes: 3

This question was in the test I took on August 2024

Comment 4

ID: 1230484 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 14:23 Selected Answer: A Upvotes: 1

Sorry, A is the correct answer.

Comment 5

ID: 1230483 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 14:22 Selected Answer: B Upvotes: 1

B is the correct answer.

Comment 6

ID: 1214419 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:33 Selected Answer: - Upvotes: 1

Is this question still available on May 21, 2024?

Comment 7

ID: 1190415 User: NullVoider_0 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 06 Apr 2024 15:28 Selected Answer: - Upvotes: 2

Question is incomplete and needs more context to determine the answer.

Comment 8

ID: 1145955 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 10 Feb 2024 07:50 Selected Answer: A Upvotes: 2

A. Create a custom language model: This option allows for the customization of the language model used for transcribing audio and video content. By creating a custom language model, you can train it to understand and transcribe specialized jargon and industry-specific terminology with high accuracy. This directly addresses the requirement to transcribe jargon with high accuracy and supports searches for equivalent terms by ensuring that the transcriptions are as accurate and relevant as possible.

Comment 9

ID: 663586 User: mk1967 Badges: - Relative Date: 3 years, 6 months ago Absolute Date: Thu 08 Sep 2022 14:13 Selected Answer: A Upvotes: 3

The requirement "Can transcribe jargon with high accuracy" wouldn't be met with the answer B.

Comment 10

ID: 642062 User: AjoseO Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Wed 03 Aug 2022 21:41 Selected Answer: A Upvotes: 3

--- Can transcribe jargon with high accuracy
Video Indexer (VI), the AI service for Azure Media Services enables the customization of language models by allowing customers to upload examples of sentences or words belonging to the vocabulary of their specific use case. Since speech recognition can sometimes be tricky, VI enables you to train and adapt the models for your specific domain. Harnessing this capability allows organizations to improve the accuracy of the Video Indexer generated transcriptions in their accounts.

https://azure.microsoft.com/en-us/blog/new-ways-to-train-custom-language-models-effortlessly/

Comment 11

ID: 636536 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 25 Jul 2022 09:50 Selected Answer: A Upvotes: 1

Leaning towards A on this.
https://azure.microsoft.com/en-us/blog/new-ways-to-train-custom-language-models-effortlessly/
https://docs.microsoft.com/en-us/azure/azure-video-indexer/video-indexer-overview

Comment 12

ID: 631386 User: AiEngineerS Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Thu 14 Jul 2022 15:10 Selected Answer: - Upvotes: 1

I think this is B... because even though A makes sense, you have to extract this information from the video somehow. So B.

Comment 13

ID: 616303 User: ANIKI51419 Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Tue 14 Jun 2022 18:17 Selected Answer: - Upvotes: 3

should be A

Comment 13.1

ID: 620065 User: sdokmak Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Wed 22 Jun 2022 01:44 Selected Answer: - Upvotes: 2

I think you're right
https://azure.microsoft.com/en-us/blog/new-ways-to-train-custom-language-models-effortlessly/

20. AI-102 Topic 4 Question 25

Sequence
76
Discussion ID
150067
Source URL
https://www.examtopics.com/discussions/microsoft/view/150067-exam-ai-102-topic-4-question-25-discussion/
Posted By
a8da4af
Posted At
Oct. 22, 2024, 9:10 p.m.

Question

HOTSPOT
-

You have an Azure subscription.

You plan to build a solution that will analyze scanned documents and export relevant fields to a database.

You need to recommend which Azure AI service to deploy for the following types of documents:

• Internal expenditure request authorization forms
• Supplier invoices

The solution must minimize development effort.

What should you recommend for each document type? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 2 comments Click to expand

Comment 1

ID: 1301682 User: a8da4af Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Tue 22 Oct 2024 21:10 Selected Answer: - Upvotes: 7

Correct answer, verified with ChatGPT

Internal expenditure request authorization forms
These forms are likely to be custom and specific to your organization, so a pre-built model might not accurately extract the fields you need.
Recommendation: Azure AI Document Intelligence custom model

Supplier invoices
Invoices are common and well-understood document types that Azure already provides pre-built models for.
Recommendation: Azure AI Document Intelligence pre-built model

Comment 2

ID: 1579084 User: StelSen Badges: Most Recent Relative Date: 8 months, 3 weeks ago Absolute Date: Fri 20 Jun 2025 08:42 Selected Answer: - Upvotes: 1

1) Azure AI Document Intelligence custom model
2) Azure AI Document Intelligence pre-built model

21. AI-102 Topic 2 Question 6

Sequence
80
Discussion ID
82128
Source URL
https://www.examtopics.com/discussions/microsoft/view/82128-exam-ai-102-topic-2-question-6-discussion/
Posted By
Internal_Koala
Posted At
Sept. 14, 2022, 12:31 p.m.

Question

HOTSPOT -
You are developing an application to recognize employees' faces by using the Face Recognition API. Images of the faces will be accessible from a URI endpoint.
The application has the following code.
image
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description Click to expand

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/use-persondirectory

Comments 26 comments Click to expand

Comment 1

ID: 668889 User: Internal_Koala Badges: Highly Voted Relative Date: 3 years, 5 months ago Absolute Date: Wed 14 Sep 2022 12:31 Selected Answer: - Upvotes: 21

Based on the subscription, I think, it could also be
Yes
Yes
Yes

"Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons.
S0-tier subscription quota: 1,000,000 person groups. Each holds up to 10,000 persons."

https://docs.microsoft.com/en-us/rest/api/faceapi/person-group/create?tabs=HTTP

Comment 1.1

ID: 984808 User: james2033 Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Sat 19 Aug 2023 00:11 Selected Answer: - Upvotes: 3

Person groups, not persons. For statement 2, choose No.

Comment 1.2

ID: 805825 User: AzureJobsTillRetire Badges: - Relative Date: 3 years ago Absolute Date: Sun 12 Feb 2023 02:14 Selected Answer: - Upvotes: 11

The second box should be No. The given answers are correct. The second box states that the code will work for up to 10,000 people. While this is true for S0 tier, it is false for free-tier. Since the price tier is not given, we will have to say that it is not always true, and that means it is false

Comment 1.2.1

ID: 807265 User: surasahoo Badges: - Relative Date: 3 years ago Absolute Date: Mon 13 Feb 2023 10:58 Selected Answer: - Upvotes: 4

Hi, have you passed the exam? Did you get simulation questions?

Comment 1.2.1.1

ID: 808897 User: Adobe02 Badges: - Relative Date: 3 years ago Absolute Date: Tue 14 Feb 2023 23:09 Selected Answer: - Upvotes: 1

Following

Comment 1.2.2

ID: 1270933 User: Ody Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 22 Aug 2024 22:32 Selected Answer: - Upvotes: 1

The question isn't really about the SKU, but asking if the "code will work" for that many. The code will.

Comment 1.3

ID: 890098 User: Rob77 Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Fri 05 May 2023 16:41 Selected Answer: - Upvotes: 10

2nd is "no". Nothing is stopping you from specifying another group using the code so even free tier is 1000x1000 = 1m people

Comment 1.4

ID: 1232407 User: baliuxas07 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 18 Jun 2024 13:56 Selected Answer: - Upvotes: 1

404 on the source

Comment 2

ID: 918728 User: ziggy1117 Badges: Highly Voted Relative Date: 2 years, 9 months ago Absolute Date: Fri 09 Jun 2023 00:50 Selected Answer: - Upvotes: 13

Yes
No
Yes

"Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons. So in the code you can have 1000 person groups and 1000 persons each giving you 1,000,000 people

Comment 3

ID: 1578303 User: StelSen Badges: Most Recent Relative Date: 8 months, 4 weeks ago Absolute Date: Tue 17 Jun 2025 14:15 Selected Answer: - Upvotes: 1

1. “The code will add a face image to a person object in a person group.” → Yes
The endpoint .../persongroups/{person_group_id}/persons/{person_id}/persistedFaces is used specifically for adding a face to a person object in a person group.

2. “The code will work for up to 10,000 people.” → Yes
Azure Face API allows up to 1,000,000 faces per subscription and up to 1,000 persons per person group, but 10,000 persons are allowed per subscription when using LargePersonGroup, which is the newer standard.
So yes, the system as designed can in general scale to 10,000 people under the appropriate configuration.

3. “add_face can be called multiple times to add multiple face images to a person object.” → Yes

Azure Face API supports multiple face images per person to improve accuracy during training and identification.
Each call adds a new persistedFaceId
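
The endpoint shape quoted above can be sketched as a URL builder; the base endpoint and IDs below are placeholders, not a real resource:

```python
# Sketch of the "PersonGroup Person - Add Face" REST path quoted in the comment
# (Face API v1.0 shape). Endpoint and IDs are illustrative placeholders only.

def add_face_url(endpoint: str, person_group_id: str, person_id: str) -> str:
    # POST to this URL (with the image bytes or a URL in the body) adds one
    # face to the person and returns a new persistedFaceId each time.
    return (f"{endpoint}/face/v1.0/persongroups/{person_group_id}"
            f"/persons/{person_id}/persistedFaces")

url = add_face_url("https://example.cognitiveservices.azure.com",
                   "employees", "1234-abcd")
print(url)
```

Calling this repeatedly for the same person is how multiple face images get attached to one person object, per the comment above.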

Comment 4

ID: 1286124 User: master9 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Thu 19 Sep 2024 06:05 Selected Answer: - Upvotes: 1

ChatGPT Response:

1. The code will add a face image to a person object in a person group:
Yes: In the Face API, adding a face to a person object in a person group is a typical operation. You use the add_face method (or similar method depending on the SDK or API) to associate a face with a person object within a specific person group.
2. The code will work for up to 10,000 people:
Yes: Azure Face API can typically support up to 10,000 people in a person group. This is a documented limit for the Face API's person groups, so the code can handle this number of people.
3. add_face can be called multiple times to add multiple face images to a person object:
Yes: You can call the add_face method multiple times to associate multiple face images with a single person object. This helps improve the accuracy of face recognition by allowing the system to recognize the same person from different angles or lighting conditions.

Comment 5

ID: 1276492 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Mon 02 Sep 2024 08:36 Selected Answer: - Upvotes: 1

Another of those ambiguous guessing games from dysfunctional Microsoft. I will answer the second one No, assuming the question creator is asking whether one person group can have 10K persons. Otherwise I hope he/she would have given the plan.
I do not understand the point of all these questions and of remembering these useless facts.

Comment 5.1

ID: 1276493 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Mon 02 Sep 2024 08:37 Selected Answer: - Upvotes: 1

Changed my mind: I will go for Yes, assuming the standard plan.

Comment 6

ID: 1256367 User: Labani1987 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Sat 27 Jul 2024 18:59 Selected Answer: - Upvotes: 1

The second should be Yes. Link: https://learn.microsoft.com/en-us/rest/api/face/person-group-operations/create-person-group?view=rest-face-v1.1-preview.1&tabs=HTTP. It is supported by the free-tier subscription quota.

Comment 7

ID: 1248470 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 15 Jul 2024 19:20 Selected Answer: - Upvotes: 1

Yes
No
Yes

Comment 8

ID: 1220338 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:23 Selected Answer: - Upvotes: 2

The correct sequence for this problem solution is Yes No Yes.

Comment 9

ID: 1218329 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 14:54 Selected Answer: - Upvotes: 2

Yes
No
Yes

Comment 10

ID: 1214387 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:10 Selected Answer: - Upvotes: 1

Yes No Yes

Comment 11

ID: 1185364 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 29 Mar 2024 11:54 Selected Answer: - Upvotes: 1

The code you’ve provided is intended to add a face image to a person object in a person group using the Azure Face API, so:

Yes, the code will add a face image to a person object in a person group, provided the code is corrected for syntax errors and proper API usage.
Yes, the code can work for up to 10,000 people, as long as the Azure Face API limits are adhered to and the appropriate subscription tier is used.
Yes, the add_face function can be called multiple times to add multiple face images to a person object, subject to the limits imposed by the Azure Face API.

Comment 12

ID: 1179796 User: Ody Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 03:38 Selected Answer: - Upvotes: 1

Images sent to the service are not stored after analysis.

https://learn.microsoft.com/en-us/legal/cognitive-services/face/data-privacy-security

Comment 13

ID: 1158082 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Sat 24 Feb 2024 18:59 Selected Answer: - Upvotes: 1

Is this a trick question?
From here: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
No image will be stored. Only the extracted face feature(s) will be stored on server until PersonGroup PersonFace - Delete, PersonGroup Person - Delete or PersonGroup - Delete is called.

Comment 14

ID: 1116733 User: davidorti Badges: - Relative Date: 2 years, 2 months ago Absolute Date: Mon 08 Jan 2024 16:41 Selected Answer: - Upvotes: 2

Yes
No - as the code will keep working with other groups (for instance), and as AzureJobsTillRetire says, a statement that's not generally true is false
No - as the image is not really 'added', just its features

"No image will be stored. Only the extracted face feature will be stored on server until PersonGroup PersonFace - Delete, PersonGroup Person - Delete or PersonGroup - Delete is called." --see: https://learn.microsoft.com/en-us/rest/api/faceapi/person-group/create?view=rest-faceapi-v1.0&tabs=HTTP

The way it works, you have to update a face https://learn.microsoft.com/en-us/rest/api/faceapi/person-group-person/update-face?view=rest-faceapi-v1.0&tabs=HTTP. If you register a new pic for an existing user it will just create a new one and return a new persistedFaceId.

Comment 14.1

ID: 1179800 User: Ody Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 03:41 Selected Answer: - Upvotes: 1

If that's your reasoning, then you can't select Yes for the first one. They use the same verbiage in both.

Comment 15

ID: 1049639 User: sl_mslconsulting Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Sat 21 Oct 2023 18:07 Selected Answer: - Upvotes: 1

You can vary the person group ID and the person ID while making the call, so even the free tier is well above the mentioned 10,000-people limitation. The only limit defined for this API is this 403 response:
"Persisted face number reached limit, maximum is 248 per person."
Link here: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b

Comment 15.1

ID: 1049643 User: sl_mslconsulting Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Sat 21 Oct 2023 18:11 Selected Answer: - Upvotes: 1

The 10,000 limitation would make more sense if they were asking about this API: POST {Endpoint}/face/v1.0/persongroups/{personGroupId}/persons

Comment 16

ID: 669058 User: momentumhd Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Wed 14 Sep 2022 15:37 Selected Answer: - Upvotes: 4

Once you have the Person ID from the Create Person call, you can add up to 248 face images to a Person per recognition model.
They are all true; the limit is 75 million persons per group.

22. AI-102 Topic 2 Question 11

Sequence
81
Discussion ID
75223
Source URL
https://www.examtopics.com/discussions/microsoft/view/75223-exam-ai-102-topic-2-question-11-discussion/
Posted By
PHD_CHENG
Posted At
May 6, 2022, 5:19 a.m.

Question

You use the Custom Vision service to build a classifier.
After training is complete, you need to evaluate the classifier.
Which two metrics are available for review? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

  • A. recall
  • B. F-score
  • C. weighted accuracy
  • D. precision
  • E. area under the curve (AUC)

Suggested Answer

AD

Answer Description Click to expand


Community Answer Votes

Comments 10 comments Click to expand

Comment 1

ID: 1578311 User: StelSen Badges: - Relative Date: 8 months, 4 weeks ago Absolute Date: Tue 17 Jun 2025 14:39 Selected Answer: AD Upvotes: 1

A. recall
D. precision

Comment 2

ID: 1235178 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:17 Selected Answer: AD Upvotes: 1

I say this answer is A and D.

Comment 3

ID: 1229318 User: anto69 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 12 Jun 2024 18:35 Selected Answer: AD Upvotes: 1

Chat GPT and me: A + D

Comment 4

ID: 1220244 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:44 Selected Answer: - Upvotes: 1

A and D are the right answers.

Comment 5

ID: 1152255 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 23:29 Selected Answer: AD Upvotes: 2

Zellck provided an excellent answer with precise documentation and interpretation of precision and recall metrics. Precision refers to the positive predictive value - the proportion of true positive results among all positively identified outcomes. Meanwhile, recall represents sensitivity - the proportion of actual positive cases that are correctly identified as such.

Precision and recall form a fundamental pair of performance indicators that entail an inherent trade-off. As one metric is optimized, the other typically suffers as a consequence. Specifically, as the precision rate increases, the recall rate often correspondingly decreases. The optimal balance between precision and recall depends on the business context and specific needs of the use case. By clearly explaining the definitions and relationship between these two metrics, Zellck thoroughly addressed the concepts with clarity and accuracy.
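
The two definitions in this comment reduce to a couple of lines of arithmetic; the counts below are made-up illustrative numbers, not Custom Vision output:

```python
# Precision: of everything the classifier tagged positive, how much was right.
# Recall: of everything actually positive, how much the classifier found.

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

# Illustrative counts: 8 true positives, 2 false positives, 4 false negatives.
print(precision(8, 2))  # 0.8
print(recall(8, 4))     # ~0.667
```

These are exactly the two per-tag numbers the Custom Vision portal shows after training, which is why options A and D are the answer.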

Comment 6

ID: 1056565 User: trashbox Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Sun 29 Oct 2023 05:08 Selected Answer: - Upvotes: 4

Appeared on Oct/29/2023.

Comment 7

ID: 742592 User: halfway Badges: - Relative Date: 3 years, 3 months ago Absolute Date: Mon 12 Dec 2022 10:14 Selected Answer: AD Upvotes: 1

Precision and Recall: https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier#evaluate-the-classifier

Comment 8

ID: 632989 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 13:34 Selected Answer: AD Upvotes: 2

A and D are correct answers - as per PHD_CHENG
https://docs.microsoft.com/en-us/learn/modules/cv-classify-bird-species/4-understand-results-test

Comment 9

ID: 612712 User: PHD_CHENG Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Tue 07 Jun 2022 13:28 Selected Answer: - Upvotes: 1

Was on exam 7 Jun 2022

Comment 10

ID: 597523 User: PHD_CHENG Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Fri 06 May 2022 05:19 Selected Answer: AD Upvotes: 4

Answer is correct. You can find the metrics from Microsoft link https://docs.microsoft.com/en-us/learn/modules/cv-classify-bird-species/4-understand-results-test

23. AI-102 Topic 3 Question 2

Sequence
83
Discussion ID
56451
Source URL
https://www.examtopics.com/discussions/microsoft/view/56451-exam-ai-102-topic-3-question-2-discussion/
Posted By
azurelearner666
Posted At
June 30, 2021, 7:07 p.m.

Question

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You develop an application to identify species of flowers by training a Custom Vision model.
You receive images of new flower species.
You need to add the new images to the classifier.
Solution: You add the new images, and then use the Smart Labeler tool.
Does this meet the goal?

  • A. Yes
  • B. No

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 12 comments Click to expand

Comment 1

ID: 397708 User: TheB Badges: Highly Voted Relative Date: 4 years, 8 months ago Absolute Date: Sat 03 Jul 2021 17:46 Selected Answer: - Upvotes: 15

The answer is correct.

Comment 2

ID: 1040056 User: sl_mslconsulting Badges: Highly Voted Relative Date: 2 years, 5 months ago Absolute Date: Wed 11 Oct 2023 03:49 Selected Answer: B Upvotes: 12

The answer is B because of the limitations of the Smart Labeler: you should only request suggested tags for images whose tags have already been trained on at least once; don't get suggestions for a new tag that you're just beginning to train. You are given new images of species that the model has never seen, so how can you expect it to suggest what they are? Also, you can train the model right from the Smart Labeler; check the workflow and the limitations in the doc: https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/suggested-tags

Comment 3

ID: 1576689 User: azuretrainer1 Badges: Most Recent Relative Date: 9 months ago Absolute Date: Thu 12 Jun 2025 01:31 Selected Answer: A Upvotes: 2

Yes, this meets the goal.

Explanation:
When you're using Custom Vision to build a classifier and receive new images, you must do two things:

Add the new images to your project.

Label the images appropriately so the model can learn from them.

What is the Smart Labeler?
The Smart Labeler is a tool in Custom Vision that automatically suggests tags for unlabeled images based on your existing trained model.

It helps you quickly label new images by leveraging what the model already knows.

Why this solution works:
By adding the new images and using the Smart Labeler to tag them, you're following the proper process to update and improve your classifier with new data.

After labeling, you'd typically retrain the model to incorporate the new examples.

✅ So yes, this solution does meet the goal for updating a Custom Vision classifier with new flower species.

Comment 4

ID: 1346665 User: f250399 Badges: - Relative Date: 1 year, 1 month ago Absolute Date: Sat 25 Jan 2025 22:36 Selected Answer: A Upvotes: 2

Yes, your solution meets the goal. Using the Smart Labeler tool in Custom Vision allows you to add new images to your classifier efficiently. The Smart Labeler tool helps to automate the labeling process by suggesting appropriate tags for new images based on existing data, thereby speeding up the process of integrating new flower species into your model.

Comment 5

ID: 1313588 User: 9cc71b6 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 17 Nov 2024 15:53 Selected Answer: B Upvotes: 1

Answer is B

Comment 6

ID: 984555 User: james2033 Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Fri 18 Aug 2023 15:58 Selected Answer: B Upvotes: 1

Smart Labeler for suggestion https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/suggested-tags

Comment 7

ID: 898081 User: hens Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Mon 15 May 2023 08:24 Selected Answer: - Upvotes: 2

ChatGPT says:
"A. Yes.

Using the Smart Labeler tool is a valid way to train a Custom Vision model with new images. It allows the user to label images more efficiently by using an active learning approach that selects the images that will have the highest impact on the model's performance."

Comment 8

ID: 643780 User: ExamGuruBhai Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Sun 07 Aug 2022 18:53 Selected Answer: - Upvotes: 1

You must retrain the model, so the answer is B.

Comment 9

ID: 633029 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 15:00 Selected Answer: B Upvotes: 4

B is correct answer : No.

Instead, the model needs to be extended and retrained (Udemy answer).

Comment 10

ID: 558742 User: hendriktytgatpwc Badges: - Relative Date: 4 years ago Absolute Date: Tue 01 Mar 2022 14:02 Selected Answer: - Upvotes: 1

answer is correct

Comment 11

ID: 517850 User: sumanshu Badges: - Relative Date: 4 years, 2 months ago Absolute Date: Thu 06 Jan 2022 00:00 Selected Answer: - Upvotes: 2

Label + Retrain

Comment 12

ID: 394985 User: azurelearner666 Badges: - Relative Date: 4 years, 8 months ago Absolute Date: Wed 30 Jun 2021 19:07 Selected Answer: - Upvotes: 5

Correct! Retraining is necessary!

24. AI-102 Topic 1 Question 4

Sequence
89
Discussion ID
76962
Source URL
https://www.examtopics.com/discussions/microsoft/view/76962-exam-ai-102-topic-1-question-4-discussion/
Posted By
booofin
Posted At
June 20, 2022, 7:57 p.m.

Question

Your company wants to reduce how long it takes for employees to log receipts in expense reports. All the receipts are in English.
You need to extract top-level information from the receipts, such as the vendor and the transaction total. The solution must minimize development effort.
Which Azure service should you use?

  • A. Custom Vision
  • B. Personalizer
  • C. Form Recognizer
  • D. Computer Vision

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 11 comments Click to expand

Comment 1

ID: 1572637 User: man5484 Badges: - Relative Date: 9 months, 2 weeks ago Absolute Date: Tue 27 May 2025 11:51 Selected Answer: C Upvotes: 1

C. Form Recognizer
Has a prebuilt receipt model for English receipts.
Can extract: vendor, total, date, items, tax, etc.
Super low-code — literally just upload a PDF or image and hit the API.
You can even customize it if your receipts are non-standard.
— minimal dev effort, high accuracy, designed for receipts.

Comment 2

ID: 1195215 User: CDL_Learner Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 14 Apr 2024 04:00 Selected Answer: C Upvotes: 2

Form Recognizer is an AI-powered document extraction service that understands your forms, enabling you to extract text, tables, and key-value pairs from your documents, whether print or handwritten. It’s designed to recognize and extract information from receipts, among other types of documents, which makes it the best choice for this scenario. It can easily extract top-level information such as the vendor and transaction total from receipts, thereby reducing the time employees spend on logging receipts in expense reports. This solution also minimizes development effort as it provides pre-built models for common extraction tasks.
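
The "vendor and transaction total" extraction described here can be sketched without a live service call; the JSON below is a hand-made, simplified stand-in for a prebuilt-receipt analysis result (field names follow the prebuilt receipt model, but the shape is trimmed for illustration):

```python
import json

# Simplified mock of a Form Recognizer / Document Intelligence
# prebuilt-receipt result. Real responses are larger and wrapped in an
# analyzeResult envelope; this is an illustrative sketch only.
sample = json.loads("""
{
  "documents": [{
    "docType": "receipt.retailMeal",
    "fields": {
      "MerchantName": {"type": "string", "valueString": "Contoso Coffee"},
      "Total": {"type": "number", "valueNumber": 14.50}
    }
  }]
}
""")

fields = sample["documents"][0]["fields"]
vendor = fields["MerchantName"]["valueString"]
total = fields["Total"]["valueNumber"]
print(vendor, total)  # Contoso Coffee 14.5
```

The point for the exam question: the prebuilt model already returns named fields like these, so there is nothing to train, which is the "minimize development effort" requirement.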

Comment 2.1

ID: 1195219 User: CDL_Learner Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 14 Apr 2024 04:02 Selected Answer: - Upvotes: 2

the other options are not suitable:
• A. Custom Vision: This service is used to build custom image classification models. It could be used to recognize receipts, but it wouldn’t extract the detailed information from them like Form Recognizer can.
• B. Personalizer: This is an AI service that delivers personalized user experiences. It’s not designed for processing receipts or extracting information from documents.
• D. Computer Vision: This service can analyze visual features in images, but it’s not specialized for extracting structured data from forms or receipts. It would require significant additional development to extract specific fields from a receipt compared to Form Recognizer.

Comment 3

ID: 1191333 User: michaelmorar Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 08 Apr 2024 05:43 Selected Answer: C Upvotes: 1

Form recogniser supports among other common document types, receipts and invoices.

Comment 4

ID: 1147002 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sun 11 Feb 2024 07:00 Selected Answer: C Upvotes: 2

No doubt: use Form Recognizer to locate the content to be OCR'd.

Comment 5

ID: 1069191 User: ccampagna Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Mon 13 Nov 2023 10:54 Selected Answer: - Upvotes: 4

The correct answer is C, "Form Recognizer". This service changed its name to Document Intelligence a few weeks ago.

Comment 6

ID: 950970 User: james2033 Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Thu 13 Jul 2023 22:07 Selected Answer: C Upvotes: 2

See example of Invoice with Azure Form Recognizer at https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/overview?view=form-recog-3.0.0#invoice

Comment 7

ID: 749999 User: kml2003 Badges: - Relative Date: 3 years, 2 months ago Absolute Date: Mon 19 Dec 2022 16:43 Selected Answer: C Upvotes: 2

Form recognizer

Comment 8

ID: 673180 User: NK0709 Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Mon 19 Sep 2022 12:33 Selected Answer: - Upvotes: 1

C. Form Recognizer

Comment 9

ID: 627550 User: Eltooth Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Tue 05 Jul 2022 19:02 Selected Answer: C Upvotes: 2

C is correct answer.

Comment 10

ID: 619410 User: booofin Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Mon 20 Jun 2022 19:57 Selected Answer: - Upvotes: 1

Use Form recognizer

25. AI-102 Topic 1 Question 9

Sequence
90
Discussion ID
57153
Source URL
https://www.examtopics.com/discussions/microsoft/view/57153-exam-ai-102-topic-1-question-9-discussion/
Posted By
Francesco1985
Posted At
July 5, 2021, 10:53 a.m.

Question

You have the following C# method for creating Azure Cognitive Services resources programmatically.
image
You need to call the method to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically.
Which code should you use?

  • A. create_resource(client, "res1", "ComputerVision", "F0", "westus")
  • B. create_resource(client, "res1", "CustomVision.Prediction", "F0", "westus")
  • C. create_resource(client, "res1", "ComputerVision", "S0", "westus")
  • D. create_resource(client, "res1", "CustomVision.Prediction", "S0", "westus")

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 19 comments Click to expand

Comment 1

ID: 424277 User: dinesh_tng Badges: Highly Voted Relative Date: 4 years, 7 months ago Absolute Date: Fri 13 Aug 2021 14:10 Selected Answer: - Upvotes: 70

Answer shall be A, as there is free tier available for Computer Vision service.
- Free - Web/Container
- 20 per minute
- 5,000 free transactions per month

Comment 1.1

ID: 516136 User: sumanshu Badges: - Relative Date: 4 years, 2 months ago Absolute Date: Tue 04 Jan 2022 00:36 Selected Answer: - Upvotes: 1

But which feature? It's not mentioned in the pricing tier. It could be normal Computer Vision, i.e. boundary detection, etc.

Comment 1.2

ID: 887714 User: Rob77 Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Tue 02 May 2023 19:04 Selected Answer: - Upvotes: 1

"Caption" is in preview and seems to be S3 tier so not free - see this
https://azure.microsoft.com/en-gb/pricing/details/cognitive-services/computer-vision/

Comment 2

ID: 1572640 User: man5484 Badges: Most Recent Relative Date: 9 months, 2 weeks ago Absolute Date: Tue 27 May 2025 12:17 Selected Answer: A Upvotes: 1

A is correct

Comment 3

ID: 1271057 User: testmaillo020 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 23 Aug 2024 06:23 Selected Answer: - Upvotes: 3

You are correct in pointing out that the requirement specifies the resource will be used to generate captions of images automatically. The correct service for generating captions (describing the content of images) is **Computer Vision** rather than **Custom Vision**.

**Computer Vision** is specifically designed for tasks like image analysis, including generating image descriptions (captions). On the other hand, **Custom Vision** is used for training and deploying custom image classifiers.

So, the correct choice remains:

**A. `create_resource(client, "res1", "ComputerVision", "F0", "westus")`**

This option correctly uses the **ComputerVision** service in the **F0** (free) tier, which is suitable for generating captions automatically in the West US region.

Comment 4

ID: 1264422 User: shanakrs Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 12 Aug 2024 04:37 Selected Answer: - Upvotes: 1

Answer is "A"

https://azure.microsoft.com/en-us/pricing/details/cognitive-services/computer-vision/

Image Analysis
Instance
- Free (F0) - Web/Container
Features
- All
Price
- 5,000 free transactions per month
- 20 transactions per minute

Comment 5

ID: 1235859 User: WhatIsItAbout Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 23 Jun 2024 17:04 Selected Answer: A Upvotes: 4

The ComputerVision resource is more commonly used because it comes with pre-built capabilities for image analysis, including caption generation, without the need to train a custom model. A CustomVision.Prediction resource, on the other hand, must be specifically trained and does not support generating captions.

Comment 6

ID: 1234339 User: GoldBear Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 12:34 Selected Answer: A Upvotes: 1

The free version of Computer Vision.
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/limits-and-quotas

Comment 7

ID: 1211990 User: harnoor24 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 15 May 2024 16:45 Selected Answer: A Upvotes: 1

Computer Vision (Also Called AI Vision now) has free tier

Comment 8

ID: 1211319 User: JamesKJoker Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Tue 14 May 2024 11:44 Selected Answer: A Upvotes: 1

Generating Caption is not Possible with Custom Vision

Comment 9

ID: 1209952 User: demonite Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sat 11 May 2024 22:06 Selected Answer: - Upvotes: 1

Answer is A
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-describe-images-40?tabs=image

Comment 10

ID: 1203177 User: AzureGC Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sat 27 Apr 2024 17:16 Selected Answer: - Upvotes: 2

The correct answer is B.

* While the discussion below is about the free vs. for-fee tiers, the key phrase in the answer is "captions" for images. To generate captions, one must use the CustomVisionPredictionClient; ComputerVision generates descriptions, not captions. See the service options.

https://learn.microsoft.com/en-us/azure/ai-services/multi-service-resource?tabs=windows&pivots=programming-language-csharp#choose-a-service-and-pricing-tier

Comment 11

ID: 1152230 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 22:26 Selected Answer: A Upvotes: 1

There is Free tier available for Computer Vision

Comment 12

ID: 1147012 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sun 11 Feb 2024 07:23 Selected Answer: A Upvotes: 2

Functionality Focus: The task of generating captions for images automatically aligns more closely with the features offered by the Computer Vision API, specifically designed to analyze images and provide information about them, including descriptive captions. This functionality is not a focus of the Custom Vision service, which is more about applying custom models to new images for classification or object detection purposes.

Comment 13

ID: 1130565 User: suzanne_exam Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Wed 24 Jan 2024 13:12 Selected Answer: - Upvotes: 3

I don't know why the answer says there is no free tier for Computer Vision. If you go to the portal and create a new resource, the F0 option is there.

Comment 14

ID: 984225 User: james2033 Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Fri 18 Aug 2023 08:35 Selected Answer: A Upvotes: 2

See the sample source code here:
https://learn.microsoft.com/en-us/azure/ai-services/multi-service-resource?pivots=programming-language-csharp&tabs=windows#call-management-methods

Choose the option that has F0 and ComputerVision; Computer Vision is the service that infers captions from images.

Comment 15

ID: 920432 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Sun 11 Jun 2023 07:26 Selected Answer: A Upvotes: 2

A Computer Vision resource will be used to generate captions of images automatically.

Comment 16

ID: 916992 User: EliteAllen Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Wed 07 Jun 2023 09:53 Selected Answer: A Upvotes: 2

There are two pricing tiers for Computer Vision, F0 & S1, so the answer is A.

Comment 17

ID: 875292 User: mgafar Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Thu 20 Apr 2023 07:13 Selected Answer: - Upvotes: 1

You should use the following code:

A. create_resource(client, "res1", "ComputerVision", "F0", "westus")

This code creates a free ("F0" tier) Azure Cognitive Services resource for Computer Vision in the West US Azure region. The resource will be used to generate captions of images automatically, which is a feature of the Computer Vision API.
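
The captioning capability this question hinges on can be illustrated by parsing the kind of response the Describe Image operation returns; the JSON below is a simplified mock of that response shape, not output from a real call:

```python
import json

# Simplified mock of a Computer Vision "describe image" style result.
# Real responses include requestId, metadata, and more candidate captions.
mock = json.loads("""
{
  "description": {
    "captions": [
      {"text": "a dog sitting on grass", "confidence": 0.92}
    ],
    "tags": ["dog", "grass", "outdoor"]
  }
}
""")

best = mock["description"]["captions"][0]
print(best["text"], best["confidence"])
```

This is why ComputerVision (and not CustomVision.Prediction) fits the question: the caption comes back from a prebuilt model with no training step, and the F0 tier exists for it.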

26. AI-102 Topic 2 Question 28

Sequence
92
Discussion ID
107293
Source URL
https://www.examtopics.com/discussions/microsoft/view/107293-exam-ai-102-topic-2-question-28-discussion/
Posted By
MaliSanFuu
Posted At
April 24, 2023, 12:20 p.m.

Question

You plan to build an app that will generate a list of tags for uploaded images. The app must meet the following requirements:

• Generate tags in a user's preferred language.
• Support English, French, and Spanish.
• Minimize development effort.

You need to build a function that will generate the tags for the app.

Which Azure service endpoint should you use?

  • A. Content Moderator Image Moderation
  • B. Custom Vision image classification
  • C. Computer Vision Image Analysis
  • D. Custom Translator

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 18 comments Click to expand

Comment 1

ID: 879223 User: MaliSanFuu Badges: Highly Voted Relative Date: 2 years, 10 months ago Absolute Date: Mon 24 Apr 2023 12:20 Selected Answer: C Upvotes: 20

I think the answer should be C because of the minimized development effort. Since the prebuilt model of C also fits the other two requirements, there is no need to train a custom model.

source: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/call-analyze-image?tabs=rest

Comment 2

ID: 1132968 User: evangelist Badges: Highly Voted Relative Date: 2 years, 1 month ago Absolute Date: Sat 27 Jan 2024 02:03 Selected Answer: C Upvotes: 8

Here's why:

Multilingual Tag Generation: Azure's Computer Vision Image Analysis service can analyze images and provide a list of tags describing the content of the images. It also has the capability to return these tags in various languages, including English, French, and Spanish, which aligns with your requirement.

Minimizing Development Effort: This service offers a pre-built model, which means there is no need for you to collect data and train your own model. This significantly reduces the development effort and time. You simply need to call the API with your images, and it will return the tags.
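
The "preferred language" requirement maps to a single query parameter on the analyze call; a sketch, assuming the v3.2 Analyze Image REST shape and using a placeholder endpoint:

```python
from urllib.parse import urlencode

# Sketch: one Image Analysis endpoint serves all three languages by
# switching the "language" query parameter. Endpoint is a placeholder.

def tag_url(endpoint: str, language: str) -> str:
    query = urlencode({"visualFeatures": "Tags", "language": language})
    return f"{endpoint}/vision/v3.2/analyze?{query}"

# e.g. a French-speaking user's preference
print(tag_url("https://example.cognitiveservices.azure.com", "fr"))
```

Because language selection is just a parameter on a prebuilt model, no per-language training is needed, which is the "minimize development effort" argument for answer C.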

Comment 3

ID: 1572607 User: saravananponnaiah Badges: Most Recent Relative Date: 9 months, 2 weeks ago Absolute Date: Tue 27 May 2025 09:15 Selected Answer: B Upvotes: 1

B is the correct answer. As per documentation, Computer Vision Image Analysis currently support only English.

Source: https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview-image-analysis?tabs=4-0
Search for text - "Currently, English is the only supported language for tagging and categorizing images."

So, for multi-language support, we will have to use custom vision image classification.

Comment 4

ID: 1235336 User: reiwanotora Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 12:51 Selected Answer: C Upvotes: 1

C is the answer.

Comment 5

ID: 1235169 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:13 Selected Answer: - Upvotes: 1

Why C?

Comment 6

ID: 1221061 User: fuck_india Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 29 May 2024 16:52 Selected Answer: C Upvotes: 1

C is correct.

Comment 7

ID: 1218287 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:57 Selected Answer: C Upvotes: 1

C is right.

Comment 8

ID: 1214372 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 15:58 Selected Answer: C Upvotes: 1

C. Computer Vision Image Analysis

Comment 9

ID: 1152266 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 23:45 Selected Answer: - Upvotes: 3

Generating tags for an image is an Image Analysis operation rather than an image classification operation.

Comment 10

ID: 1108962 User: ankitdhir Badges: - Relative Date: 2 years, 2 months ago Absolute Date: Fri 29 Dec 2023 18:49 Selected Answer: C Upvotes: 2

Verified with Google Bard

Comment 10.1

ID: 1139829 User: anto69 Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sun 04 Feb 2024 06:58 Selected Answer: - Upvotes: 1

C, verified with Copilot too

Comment 11

ID: 1105133 User: TRUESON Badges: - Relative Date: 2 years, 2 months ago Absolute Date: Mon 25 Dec 2023 10:31 Selected Answer: - Upvotes: 2

Selected Answer: B
French is not supported by Computer Vision

Comment 11.1

ID: 1322139 User: 3fbc31b Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 05 Dec 2024 00:21 Selected Answer: - Upvotes: 1

Yes it is. URL below.
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/language-support#image-analysis

Comment 11.2

ID: 1126627 User: josebernabeo Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Fri 19 Jan 2024 12:18 Selected Answer: - Upvotes: 1

True.

URL parameter values and the languages they select:
language=en (English)
language=es (Spanish)
language=ja (Japanese)
language=pt (Portuguese)
language=zh (Simplified Chinese)

Source: https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image?tabs=rest

Comment 11.3

ID: 1167717 User: Mehe323 Badges: - Relative Date: 2 years ago Absolute Date: Thu 07 Mar 2024 07:10 Selected Answer: - Upvotes: 2

French is supported, I just tested it in Azure. See also this link (the table 'Analyze image':
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/language-support#image-analysis

Comment 12

ID: 935873 User: Pixelmate Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 00:24 Selected Answer: - Upvotes: 4

Asked in 28/06/2023 exam

Comment 13

ID: 918740 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Fri 09 Jun 2023 01:19 Selected Answer: - Upvotes: 1

C: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-tagging-images

Comment 14

ID: 898073 User: hens Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Mon 15 May 2023 08:08 Selected Answer: - Upvotes: 1

ChatGPT: "To generate a list of tags for uploaded images in multiple languages, you should use the Computer Vision Image Analysis service endpoint in Azure.

Computer Vision provides a pre-built model that can generate image tags based on a given image. It also supports multiple languages, including English, French, and Spanish, which meets the requirements of the app. Additionally, the service provides a REST API, which can be easily integrated into your app without requiring significant development effort."

27. AI-102 Topic 4 Question 32

Sequence
95
Discussion ID
150444
Source URL
https://www.examtopics.com/discussions/microsoft/view/150444-exam-ai-102-topic-4-question-32-discussion/
Posted By
Christian_garcia_martin
Posted At
Oct. 29, 2024, 6:17 a.m.

Question

DRAG DROP -

You have an Azure subscription that contains an Azure AI Document Intelligence resource named DI1 and a storage account named sa1. The sa1 account contains a blob container named blob1 and an Azure Files share named share1.

You plan to build a custom model named Model1 in DI1.

You create sample forms and JSON files for Model1.

You need to train Model1 and retrieve the ID of the model.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.

image

Suggested Answer

image
Answer Description Click to expand


Comments 4 comments Click to expand

Comment 1

ID: 1304298 User: Christian_garcia_martin Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 06:17 Selected Answer: - Upvotes: 19

Microsoft copilot:

Upload the forms and JSON files to blob1: Ensure that your sample forms and JSON files are uploaded to the blob container named blob1 in the storage account sa1.

Create a shared access signature (SAS) URL for blob1: Generate a SAS URL for the blob container to provide secure access to the files.

Call the Build model REST API function: Use the SAS URL to call the Build model REST API function in DI1 to start the training process.

Call the Get model REST API function: After the training is complete, call the Get model REST API function to retrieve the ID of Model1.
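The four steps above can be sketched as request builders. This is a minimal sketch under assumptions: the endpoint, SAS URL, and API version are placeholders to verify against current documentation, and the body shape follows the documentModels build API.

```python
# Placeholders (assumptions): the DI1 endpoint, a read/list SAS URL for blob1,
# and an API version string to verify against current documentation.
ENDPOINT = "https://di1.cognitiveservices.azure.com"
SAS_URL = "https://sa1.blob.core.windows.net/blob1?<sas-token>"
API_VERSION = "2023-07-31"

def build_model_body(model_id: str) -> dict:
    """Request body for the Build model call (step 3)."""
    return {
        "modelId": model_id,
        "buildMode": "template",
        "azureBlobSource": {"containerUrl": SAS_URL},
    }

def get_model_url(model_id: str) -> str:
    """URL for the Get model call (step 4), which returns the model ID and metadata."""
    return f"{ENDPOINT}/formrecognizer/documentModels/{model_id}?api-version={API_VERSION}"
```

Steps 1 and 2 (uploading to blob1 and generating the SAS) happen in the storage account before these calls are made.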

Comment 1.1

ID: 1312295 User: Alan_CA Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 14 Nov 2024 20:26 Selected Answer: - Upvotes: 3

Looks like you're right :
https://learn.microsoft.com/en-us/rest/api/aiservices/document-models/build-model?view=rest-aiservices-v4.0%20(2024-07-31-preview)&tabs=HTTP#tabpanel_1_HTTP

Comment 2

ID: 1565435 User: marcellov Badges: Most Recent Relative Date: 10 months, 2 weeks ago Absolute Date: Thu 01 May 2025 14:46 Selected Answer: - Upvotes: 3

ChatGPT:

1. Upload the forms and JSON files to blob1
Azure AI Document Intelligence only supports blob containers (not Azure Files shares) for custom model training data. Store your sample forms and JSON label files in blob1.

2. Create a shared access signature (SAS) URL for blob1
Generate a read/list SAS token for the blob container to grant Document Intelligence temporary access to your training data. Use the Azure portal, Azure CLI, or Storage Explorer to create this SAS URL.

3. Call the Build model REST API function

4. Call the Get model REST API function
After training completes (typically minutes to hours), retrieve Model1's ID and metadata.

Why Not These Actions?
Retrieve the access key for sa1: Unnecessary for SAS-based access.
Upload to share1: Azure Files shares are not supported for Document Intelligence training.
Get info REST API: Generic resource info call, not model-specific.

Comment 3

ID: 1317105 User: nastolgia Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 24 Nov 2024 16:42 Selected Answer: - Upvotes: 2

We can connect to Azure Blob Storage using either an access key or a SAS token.

28. AI-102 Topic 6 Question 1

Sequence
96
Discussion ID
81465
Source URL
https://www.examtopics.com/discussions/microsoft/view/81465-exam-ai-102-topic-6-question-1-discussion/
Posted By
Sharks82
Posted At
Sept. 10, 2022, 12:37 a.m.

Question

DRAG DROP -
You are developing the smart e-commerce project.
You need to design the skillset to include the contents of PDFs in searches.
How should you complete the skillset design diagram? To answer, drag the appropriate services to the correct stages. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand

Box 1: Azure Blob storage -
At the start of the pipeline, you have unstructured text or non-text content (such as images, scanned documents, or JPEG files). Data must exist in an Azure data storage service that can be accessed by an indexer.

Box 2: Computer Vision API -
Scenario: Provide users with the ability to search insight gained from the images, manuals, and videos associated with the products.
The Computer Vision Read API is Azure's latest OCR technology (learn what's new) that extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents.

Box 3: Translator API -
Scenario: Product descriptions, transcripts, and alt text must be available in English, Spanish, and Portuguese.

Box 4: Azure Files -
Scenario: Store all raw insight data that was generated, so the data can be processed later.
Incorrect Answers:
The custom vision API from Microsoft Azure learns to recognize specific content in imagery and becomes smarter with training and time.
Reference:
https://docs.microsoft.com/en-us/azure/search/cognitive-search-concept-intro https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr
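As a hedged illustration of Boxes 2 and 3, a minimal skillset fragment pairing the built-in OCR and Translation skills might look like the following. The @odata.type values are the documented skill names; the skillset name, contexts, and field mappings are simplified assumptions.

```python
import json

skillset = {
    "name": "pdf-enrichment-skillset",  # illustrative name
    "skills": [
        {   # Box 2: OCR over images cracked out of the PDFs
            "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
            "context": "/document/normalized_images/*",
            "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
            "outputs": [{"name": "text", "targetName": "extractedText"}],
        },
        {   # Box 3: translate extracted text (Spanish shown; Portuguese analogous)
            "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
            "context": "/document",
            "defaultToLanguageCode": "es",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "translatedText", "targetName": "translated_text"}],
        },
    ],
}
skillset_json = json.dumps(skillset, indent=2)
```

The source (Box 1) and destination (Box 4) sit outside the skillset itself: the indexer pulls from the data source and the enriched output is persisted to storage.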

Comments 8 comments Click to expand

Comment 1

ID: 999398 User: M25 Badges: Highly Voted Relative Date: 2 years, 6 months ago Absolute Date: Tue 05 Sep 2023 13:03 Selected Answer: - Upvotes: 12

Source [Azure Blob Storage]
https://learn.microsoft.com/en-us/azure/search/cognitive-search-concept-intro
Import is the first step. Here, the indexer connects to a data source and pulls content (documents) into the search service. Azure Blob Storage is the most common resource used in AI enrichment scenarios, but any supported data source can provide content.
https://learn.microsoft.com/en-us/azure/search/search-indexer-overview#document-cracking
• When the document is a file with embedded images, such as a PDF, the indexer extracts text, images, and metadata. Indexers can open files from Azure Blob Storage, Azure Data Lake Storage Gen2, and SharePoint.

Comment 1.1

ID: 999399 User: M25 Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Tue 05 Sep 2023 13:04 Selected Answer: - Upvotes: 12

--> Cracking [Computer Vision API] --> Preparation [Translator API]
https://learn.microsoft.com/en-us/azure/search/cognitive-search-concept-intro
Built-in skills are based on the Azure AI services APIs: Azure AI Computer Vision and Language Service.
• Image processing skills include Optical Character Recognition (OCR) and identification of visual features, such as facial detection, image interpretation, image recognition (famous people and landmarks), or attributes like image orientation. These skills create text representations of image content for full text search in Azure Cognitive Search.
• Machine translation is provided by the Text Translation skill, often paired with language detection for multi-language solutions.

Comment 1.1.1

ID: 999400 User: M25 Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Tue 05 Sep 2023 13:05 Selected Answer: - Upvotes: 19

--> Destination [Azure Blob Storage]
https://learn.microsoft.com/en-us/azure/search/cognitive-search-concept-intro
• Enriched content is generated during skillset execution, and is temporary unless you save it. You can enable an enrichment cache [Physically, the cache is stored in a blob container in your Azure Storage account, one per indexer.] to persist cracked documents and skill outputs for subsequent reuse during future skillset executions.
Exploration is the last step. Output is always a search index that you can query from a client app. Output can optionally be a knowledge store consisting of blobs and tables in Azure Storage that are accessed through data exploration tools or downstream processes. If you're creating a knowledge store, projections determine the data path for enriched content. The same enriched content can appear in both indexes and knowledge stores.

Comment 2

ID: 664969 User: Sharks82 Badges: Highly Voted Relative Date: 3 years, 6 months ago Absolute Date: Sat 10 Sep 2022 00:37 Selected Answer: - Upvotes: 7

Given answer is correct

Comment 3

ID: 1564603 User: tech_rum Badges: Most Recent Relative Date: 10 months, 2 weeks ago Absolute Date: Tue 29 Apr 2025 07:46 Selected Answer: - Upvotes: 1

In this case study
You are not just storing documents.
You have requirements like:
- Real-time stock updates into the search index.
- Multilingual structured data (English, Spanish, Portuguese).
- Enriched search over product descriptions, images, videos, etc.
➡ Because of the "smart e-commerce" project's structured metadata and real-time stock update needs, Cosmos DB is the better choice as the destination.

Comment 4

ID: 1236232 User: GoldBear Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 24 Jun 2024 13:02 Selected Answer: - Upvotes: 3

This link has a diagram with the flow chart.
https://learn.microsoft.com/en-us/azure/architecture/solution-ideas/articles/cognitive-search-with-skillsets

Comment 5

ID: 1236063 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 24 Jun 2024 02:06 Selected Answer: - Upvotes: 1

Given that the enriched data needs to be used for further processing, Azure Blob Storage makes sense as it supports any file format, is scalable, and can feed multiple processing services.

Comment 5.1

ID: 1236064 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 24 Jun 2024 02:06 Selected Answer: - Upvotes: 1

as destination

29. AI-102 Topic 14 Question 1

Sequence
97
Discussion ID
76535
Source URL
https://www.examtopics.com/discussions/microsoft/view/76535-exam-ai-102-topic-14-question-1-discussion/
Posted By
Moody_L
Posted At
June 3, 2022, 2:15 p.m.

Question

You are developing the document processing workflow.
You need to identify which API endpoints to use to extract text from the financial documents. The solution must meet the document processing requirements.
Which two API endpoints should you identify? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. /vision/v3.1/read/analyzeResults
  • B. /formrecognizer/v2.0/custom/models/{modelId}/analyze
  • C. /formrecognizer/v2.0/prebuilt/receipt/analyze
  • D. /vision/v3.1/describe
  • E. /vision/v3.1/read/analyze

Suggested Answer

BC

Answer Description Click to expand
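The two suggested endpoints (options B and C) can be composed as shown in this minimal sketch; the resource endpoint and model ID are placeholders.

```python
# Placeholder Form Recognizer resource endpoint (assumption).
ENDPOINT = "https://my-fr-resource.cognitiveservices.azure.com"

def custom_model_analyze_url(model_id: str) -> str:
    """Option B: analyze financial documents with a trained custom model."""
    return f"{ENDPOINT}/formrecognizer/v2.0/custom/models/{model_id}/analyze"

def prebuilt_receipt_analyze_url() -> str:
    """Option C: analyze receipt images with the prebuilt receipt model."""
    return f"{ENDPOINT}/formrecognizer/v2.0/prebuilt/receipt/analyze"
```

The custom-model route covers the per-office financial document standards; the prebuilt receipt route covers the receipt images.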


Community Answer Votes

Comments 21 comments Click to expand

Comment 1

ID: 639192 User: rupert1o1N Badges: Highly Voted Relative Date: 3 years, 7 months ago Absolute Date: Fri 29 Jul 2022 13:58 Selected Answer: - Upvotes: 17

guys so what is the correct answer

Comment 1.1

ID: 1284582 User: mustafaalhnuty Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Mon 16 Sep 2024 11:55 Selected Answer: - Upvotes: 1

BC. You need the receipt model, as the requirements say.

Comment 1.2

ID: 1273818 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Wed 28 Aug 2024 06:53 Selected Answer: - Upvotes: 4

B E is the right answer

Comment 1.2.1

ID: 1286865 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Fri 20 Sep 2024 15:53 Selected Answer: - Upvotes: 2

B & E I would say are correct. C would be limited to just receipts, whereas E would allow many document types.

Comment 2

ID: 611072 User: Moody_L Badges: Highly Voted Relative Date: 3 years, 9 months ago Absolute Date: Fri 03 Jun 2022 14:15 Selected Answer: B Upvotes: 10

Contoso has a distinct standard for each office. Isn't the custom Form Recognizer model more appropriate?

Comment 2.1

ID: 620071 User: sdokmak Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Wed 22 Jun 2022 02:39 Selected Answer: - Upvotes: 3

Agreed, also the receipt text extraction is separate to the financial documents, question is only about the financial documents.
"The document processing solution must be able to extract tables and text from the financial documents.
The document processing solution must be able to extract information from receipt images."

Comment 3

ID: 1564617 User: tech_rum Badges: Most Recent Relative Date: 10 months, 2 weeks ago Absolute Date: Tue 29 Apr 2025 08:17 Selected Answer: BE Upvotes: 1

B & E are correct

Comment 4

ID: 1284583 User: mustafaalhnuty Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Mon 16 Sep 2024 11:56 Selected Answer: - Upvotes: 3

BC 100%

Comment 5

ID: 1284226 User: flist Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 19:21 Selected Answer: - Upvotes: 1

B, E
https://learn.microsoft.com/en-us/rest/api/aiservices/custom-models/analyze-document?view=rest-aiservices-v2.1&tabs=HTTP
https://learn.microsoft.com/en-us/rest/api/computervision/read/read?view=rest-computervision-v3.1&tabs=HTTP

Comment 6

ID: 1273817 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Wed 28 Aug 2024 06:53 Selected Answer: BE Upvotes: 3

Analysis of Endpoint C
Relevance:
Pros: Ideal for extracting information from receipt images, which is one part of the requirements.
Cons: Limited to receipts and may not be flexible enough for extracting tables and text from a variety of standardized financial documents.

Analysis of Endpoint E
Relevance:
Pros: Flexible and can handle a wide range of document types including PDFs and JPEGs, fulfilling the requirement for processing standardized financial documents that contain fewer than 20 pages.
Cons: Does not specifically target receipt information but can still extract text from receipts.
Conclusion Answer is E

Comment 7

ID: 1253818 User: SorinGav Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 23 Jul 2024 19:23 Selected Answer: - Upvotes: 2

B, C: Need to extract receipts and tables. So the endpoints format are actually from a 2.x version of the doc intelligence form recognizer API which can be found here: https://learn.microsoft.com/en-us/rest/api/aiservices/custom-models/analyze-document?view=rest-aiservices-v2.1&tabs=HTTP
To extract tables, a call to a custom model could be used if the layout endpoint is not among the options.

Comment 8

ID: 1246786 User: krzkrzkra Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 12 Jul 2024 15:39 Selected Answer: BC Upvotes: 2

Selected Answer: BC

Comment 9

ID: 1230488 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 14:29 Selected Answer: BC Upvotes: 4

BC is the correct answer.

Comment 10

ID: 1214428 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:34 Selected Answer: - Upvotes: 1

Is this question still available on May 21, 2024?

Comment 11

ID: 1214427 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:34 Selected Answer: - Upvotes: 2

Is this question still available on May 21, 2024?

Comment 12

ID: 1198624 User: azure_bimonster Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Fri 19 Apr 2024 15:20 Selected Answer: BC Upvotes: 2

B and C are correct as they are part of Azure Form Recognizer.

Comment 13

ID: 1188867 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Wed 03 Apr 2024 21:22 Selected Answer: - Upvotes: 3

Please note that these endpoints are part of Azure's Form Recognizer service, which is designed for extracting text, key/value pairs, and tables from documents, making it a great fit for the document processing requirements. The other endpoints listed (options A, D, and E) are part of the Computer Vision service and are suited to different tasks: retrieving the results of a Read operation, describing an image, and starting a Read operation, respectively. They are not the best fit for these specific document processing needs.

Comment 14

ID: 1181754 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 24 Mar 2024 16:09 Selected Answer: BC Upvotes: 3

To meet Contoso’s document processing requirements for standardized financial documents, consider the following two API endpoints:

/formrecognizer/v2.0/custom/models/{modelId}/analyze: This endpoint is ideal for extracting information from financial documents. It allows you to analyze custom models specifically designed for invoices and receipts. With this API, you can extract both tables and text from documents that adhere to distinct standards for each office1.
/formrecognizer/v2.0/prebuilt/receipt/analyze: Designed for receipt analysis, this endpoint simplifies the extraction of relevant data from receipt images. It’s well-suited for handling financial paperwork in PDF or JPEG formats, especially when dealing with documents containing fewer than 20 pages1.
By utilizing these endpoints, you can efficiently process financial documents, extract essential information, and enhance your document processing workflow.

Comment 15

ID: 1145904 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 10 Feb 2024 04:07 Selected Answer: BC Upvotes: 5

B for customization, C for the prebuilt receipt model with minimal effort.

Comment 16

ID: 1144026 User: PCRamirez Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Thu 08 Feb 2024 04:49 Selected Answer: - Upvotes: 2

According to Windows Copilot:
B) /formrecognizer/v2.0/custom/models/{modelId}/analyze: This endpoint is part of the Form Recognizer service. It allows you to create custom models for extracting structured data from documents. By training a custom model with examples of financial documents, you can extract relevant information such as tables and text. Since the solution needs to process standardized financial documents with distinct standards for each office, using a custom model tailored to your specific requirements is a suitable choice.

C) /formrecognizer/v2.0/prebuilt/receipt/analyze: This endpoint is also part of the Form Recognizer service. It is specifically designed for extracting information from receipt images. Given that the solution must handle receipt data, this prebuilt receipt analysis endpoint is a good fit.

Comment 17

ID: 1121716 User: lastget Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 13 Jan 2024 15:14 Selected Answer: BC Upvotes: 3

According to the problem, we need:
The document processing solution must be able to extract tables and text from the financial documents.
The document processing solution must be able to extract information from receipt images.

So the answer I will choose BC

30. AI-102 Topic 1 Question 24

Sequence
98
Discussion ID
74350
Source URL
https://www.examtopics.com/discussions/microsoft/view/74350-exam-ai-102-topic-1-question-24-discussion/
Posted By
kiassi1998
Posted At
April 24, 2022, 6:54 p.m.

Question

You have receipts that are accessible from a URL.
You need to extract data from the receipts by using Form Recognizer and the SDK. The solution must use a prebuilt model.
Which client and method should you use?

  • A. the FormRecognizerClient client and the StartRecognizeContentFromUri method
  • B. the FormTrainingClient client and the StartRecognizeContentFromUri method
  • C. the FormRecognizerClient client and the StartRecognizeReceiptsFromUri method
  • D. the FormTrainingClient client and the StartRecognizeReceiptsFromUri method

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 18 comments Click to expand

Comment 1

ID: 592712 User: 2ez4Zane Badges: Highly Voted Relative Date: 3 years, 10 months ago Absolute Date: Wed 27 Apr 2022 01:27 Selected Answer: - Upvotes: 34

Should be C
private static async Task AnalyzeReceipt(
    FormRecognizerClient recognizerClient, string receiptUri)
{
    RecognizedFormCollection receipts = await recognizerClient
        .StartRecognizeReceiptsFromUri(new Uri(receiptUri))
        .WaitForCompletionAsync();
}

Comment 2

ID: 1067009 User: Prodyna Badges: Highly Voted Relative Date: 2 years, 4 months ago Absolute Date: Fri 10 Nov 2023 08:19 Selected Answer: - Upvotes: 17

Was on the November exam, but it said "using Document Intelligence"; the answer choices were the same.

Comment 3

ID: 1564363 User: AshokChavan07 Badges: Most Recent Relative Date: 10 months, 2 weeks ago Absolute Date: Mon 28 Apr 2025 10:49 Selected Answer: C Upvotes: 2

C is the correct answer.

Comment 4

ID: 1274364 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 29 Aug 2024 07:42 Selected Answer: C Upvotes: 2

C is the only answer

Comment 5

ID: 1225350 User: InfoMerp Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 11:58 Selected Answer: - Upvotes: 1

Answer C 100%

Comment 6

ID: 1219571 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 27 May 2024 15:17 Selected Answer: C Upvotes: 2

Right answer is C.

Comment 7

ID: 1218375 User: PeteColag Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 16:35 Selected Answer: - Upvotes: 2

Should be C. We are not doing training here, we are doing inference.

Comment 8

ID: 1217635 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 17:06 Selected Answer: C Upvotes: 1

C is the right answer, but the exam now uses the name Document Intelligence.

Comment 9

ID: 1203201 User: AzureGC Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sat 27 Apr 2024 18:51 Selected Answer: C Upvotes: 1

C: Based on the comments in the example

Comment 10

ID: 1193537 User: CDL_Learner Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 11 Apr 2024 09:40 Selected Answer: C Upvotes: 2

The best answer is C: the FormRecognizerClient client and the StartRecognizeReceiptsFromUri method.

Reason for choosing option C: The FormRecognizerClient is the client used to interact with the service in the Azure.AI.FormRecognizer namespace. The StartRecognizeReceiptsFromUri method is specifically designed to recognize and extract data from receipts, which is exactly what the question asks for. This method uses a prebuilt model trained on receipts, making it the ideal choice for this scenario.

Comment 10.1

ID: 1193538 User: CDL_Learner Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 11 Apr 2024 09:40 Selected Answer: - Upvotes: 2

Why other options are not suitable:

Option A: The StartRecognizeContentFromUri method is used to extract layout information such as tables, lines, words, and selection marks. It’s not specifically designed for receipts.
Option B: The FormTrainingClient is used to train custom models, not to extract data from documents using prebuilt models. Also, the StartRecognizeContentFromUri method, as mentioned above, is not specifically designed for receipts.
Option D: Similar to option B, the FormTrainingClient is not suitable for this scenario as it’s used for training custom models. The StartRecognizeReceiptsFromUri method would be correct if used with FormRecognizerClient.

Comment 11

ID: 1154796 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Tue 20 Feb 2024 16:03 Selected Answer: C Upvotes: 2

Seems like a typo in the answer. The code in the explanation clearly shows that it is FormRecognizerClient.

Comment 12

ID: 1147840 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Mon 12 Feb 2024 06:52 Selected Answer: C Upvotes: 2

C is Correct

Comment 13

ID: 923327 User: nitz14 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Wed 14 Jun 2023 17:14 Selected Answer: C Upvotes: 4

To extract data from receipts using Form Recognizer and the SDK, while using a prebuilt model, you should use option C: the FormRecognizerClient client and the StartRecognizeReceiptsFromUri method.

Explanation:

Option A (the FormRecognizerClient client and the StartRecognizeContentFromUri method) is incorrect because the StartRecognizeContentFromUri method is used for general content recognition, not specifically for receipts.

Option B (the FormTrainingClient client and the StartRecognizeContentFromUri method) is incorrect because the FormTrainingClient client is used for training custom models, not for extracting data from prebuilt models.

Option D (the FormTrainingClient client and the StartRecognizeReceiptsFromUri method) is incorrect because the FormTrainingClient client is not used for extracting data; it is used for training custom models.

Therefore, the correct choice is option C: the FormRecognizerClient client and the StartRecognizeReceiptsFromUri method.
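The client/method pairing from that explanation can be captured in a small lookup. This is a sketch only; the string names mirror the .NET SDK members listed in the options, and the lookup itself is illustrative.

```python
# Which client exposes which recognition method, per the option analysis above.
CLIENT_METHODS = {
    "FormRecognizerClient": {
        "StartRecognizeReceiptsFromUri",   # prebuilt receipt model
        "StartRecognizeContentFromUri",    # layout/content extraction
    },
    "FormTrainingClient": set(),           # training/management only, no analysis
}

def pick_for_receipts() -> tuple:
    """Return the (client, method) pair that answers the question."""
    return ("FormRecognizerClient", "StartRecognizeReceiptsFromUri")
```

The training client deliberately maps to no analysis methods, which is why options B and D fall away.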

Comment 14

ID: 864060 User: Anichebe Badges: - Relative Date: 2 years, 11 months ago Absolute Date: Fri 07 Apr 2023 18:01 Selected Answer: - Upvotes: 1

The correct answer is option C

Comment 15

ID: 839676 User: RAN_L Badges: - Relative Date: 2 years, 12 months ago Absolute Date: Wed 15 Mar 2023 09:28 Selected Answer: C Upvotes: 2

The StartRecognizeReceiptsFromUri method of the FormRecognizerClient client is used to extract data from receipts using a prebuilt model.

Comment 16

ID: 779123 User: ap1234pa Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Tue 17 Jan 2023 18:21 Selected Answer: C Upvotes: 1

C is correct

Comment 17

ID: 776101 User: ap1234pa Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Sun 15 Jan 2023 02:56 Selected Answer: C Upvotes: 1

C is the answer

31. AI-102 Topic 2 Question 17

Sequence
99
Discussion ID
81643
Source URL
https://www.examtopics.com/discussions/microsoft/view/81643-exam-ai-102-topic-2-question-17-discussion/
Posted By
goo1994
Posted At
Sept. 11, 2022, 12:30 p.m.

Question

You have the following Python function for creating Azure Cognitive Services resources programmatically.

def create_resource(resource_name, kind, account_tier, location):
    parameters = CognitiveServicesAccount(sku=Sku(name=account_tier), kind=kind, location=location, properties={})
    result = client.accounts.create(resource_group_name, resource_name, parameters)
You need to call the function to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically.
Which code should you use?

  • A. create_resource("res1", "ComputerVision", "F0", "westus")
  • B. create_resource("res1", "CustomVision.Prediction", "F0", "westus")
  • C. create_resource("res1", "ComputerVision", "S0", "westus")
  • D. create_resource("res1", "CustomVision.Prediction", "S0", "westus")

Suggested Answer

A

Answer Description Click to expand
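As a sanity check of option A, this stub mirrors the function's signature and echoes the arguments that matter. It is a sketch only; the real function passes these values to CognitiveServicesAccount and client.accounts.create.

```python
def create_resource_stub(resource_name, kind, account_tier, location):
    """Echo the arguments the real create_resource would pass to the SDK."""
    return {"name": resource_name, "kind": kind,
            "sku": account_tier, "location": location}

# Option A: Computer Vision (automatic image captions), free F0 tier, West US.
result = create_resource_stub("res1", "ComputerVision", "F0", "westus")
```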


Community Answer Votes

Comments 17 comments Click to expand

Comment 1

ID: 712299 User: ArchMelody Badges: Highly Voted Relative Date: 3 years, 4 months ago Absolute Date: Sun 06 Nov 2022 13:41 Selected Answer: A Upvotes: 46

Computer Vision provides automatic vision solutions, including captions. The key phrase is "automatic", so this answer should be obvious to everyone. I would expect more professionalism from people who charge money for a service like this one. Many questions here have incorrect and even contradictory answers... Shame!

Comment 1.1

ID: 892166 User: ulloo Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Mon 08 May 2023 14:42 Selected Answer: - Upvotes: 4

I agree.
To me it looks like many answers are deliberately set to incorrect ones. Not sure why, though.

Comment 2

ID: 1561599 User: Bhutani Badges: Most Recent Relative Date: 10 months, 4 weeks ago Absolute Date: Fri 18 Apr 2025 08:53 Selected Answer: A Upvotes: 1

This is the best answer as per MS documentation.

Comment 3

ID: 1218332 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 14:57 Selected Answer: A Upvotes: 3

ComputerVision and F0.

Comment 4

ID: 1203320 User: upliftinghut Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sun 28 Apr 2024 01:26 Selected Answer: - Upvotes: 2

duplicate question

Comment 4.1

ID: 1253635 User: dirgiklis Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 23 Jul 2024 14:06 Selected Answer: - Upvotes: 2

Set 1 question 9.

Comment 5

ID: 1079883 User: sca88 Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Sat 25 Nov 2023 10:42 Selected Answer: A Upvotes: 1

The answer is A

Comment 6

ID: 1056564 User: trashbox Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Sun 29 Oct 2023 05:08 Selected Answer: - Upvotes: 2

Appeared on Oct/29/2023.

Comment 7

ID: 1022768 User: bare Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Mon 02 Oct 2023 05:25 Selected Answer: - Upvotes: 1

Very confusing. Does Computer Vision provide captions in the free tier? I think Custom Vision does:
Computer Vision free tier:
Instance | Features | Price
Free - Web/Container | 20 transactions per minute | 5,000 transactions free per month

Custom Vision free tier:
Instance | Transactions Per Second (TPS) | Features | Price
Free | 2 TPS | Upload, training and prediction transactions, up to 2 projects, up to 1 hour training per month | 5,000 training images free per project, 10,000 predictions per month

Comment 8

ID: 936157 User: Pixelmate Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 07:23 Selected Answer: - Upvotes: 2

This was on exam 28/06

Comment 9

ID: 935872 User: Pixelmate Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 00:23 Selected Answer: - Upvotes: 2

Asked in 28/06/2023 exam

Comment 10

ID: 920673 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Sun 11 Jun 2023 14:44 Selected Answer: A Upvotes: 2

computer vision can generate captions for images

Comment 11

ID: 780205 User: ap1234pa Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Wed 18 Jan 2023 17:22 Selected Answer: A Upvotes: 2

Computer Vision has generate captions feature

Comment 12

ID: 741881 User: SSJA Badges: - Relative Date: 3 years, 3 months ago Absolute Date: Sun 11 Dec 2022 17:18 Selected Answer: B Upvotes: 1

This question asks for a free Azure service. Does the generate-captions feature work with the free tier?
Reference - https://azure.microsoft.com/en-us/pricing/details/cognitive-services/computer-vision/

Comment 13

ID: 675430 User: be_ml_team Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Wed 21 Sep 2022 20:30 Selected Answer: A Upvotes: 2

A because computer vision provides tags

Comment 14

ID: 666021 User: goo1994 Badges: - Relative Date: 3 years, 6 months ago Absolute Date: Sun 11 Sep 2022 12:30 Selected Answer: - Upvotes: 2

Answer Should be A.

Comment 14.1

ID: 666022 User: goo1994 Badges: - Relative Date: 3 years, 6 months ago Absolute Date: Sun 11 Sep 2022 12:31 Selected Answer: - Upvotes: 3

One of the features of Computer Vision: the Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the Image Analysis quickstart to get started.
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview
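To sanity-check answer A without touching Azure, the function from the question can be exercised against stand-in classes. Sku, CognitiveServicesAccount, client, and resource_group_name below are minimal local stubs, not the real azure-mgmt-cognitiveservices types:

```python
# Stub of the SDK's Sku type: only the 'name' (pricing tier) field is modeled.
class Sku:
    def __init__(self, name):
        self.name = name

# Stub of the SDK's account-parameters type.
class CognitiveServicesAccount:
    def __init__(self, sku, kind, location, properties):
        self.sku, self.kind, self.location, self.properties = sku, kind, location, properties

created = {}  # records what the fake client "creates"

class _Accounts:
    def create(self, resource_group_name, resource_name, parameters):
        created[resource_name] = parameters
        return parameters

class _Client:
    accounts = _Accounts()

client = _Client()
resource_group_name = "rg1"  # placeholder resource group

def create_resource(resource_name, kind, account_tier, location):
    parameters = CognitiveServicesAccount(
        sku=Sku(name=account_tier), kind=kind, location=location, properties={})
    return client.accounts.create(resource_group_name, resource_name, parameters)

# Answer A: ComputerVision (automatic captions) on the free F0 tier in West US.
res = create_resource("res1", "ComputerVision", "F0", "westus")
```

Running the call shows that answer A builds exactly the resource the question asks for: kind ComputerVision, free tier F0, region westus.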

32. AI-102 Topic 2 Question 38

Sequence
106
Discussion ID
136424
Source URL
https://www.examtopics.com/discussions/microsoft/view/136424-exam-ai-102-topic-2-question-38-discussion/
Posted By
Murtuza
Posted At
March 17, 2024, 6:06 p.m.

Question

HOTSPOT
-

You are developing an application that will use the Azure AI Vision client library. The application has the following code.

image

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 9 comments Click to expand

Comment 1

ID: 1264931 User: anto69 Badges: Highly Voted Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 04:43 Selected Answer: - Upvotes: 10

No Yes Yes 100%

Comment 2

ID: 1381365 User: Mattt Badges: Most Recent Relative Date: 1 year ago Absolute Date: Mon 10 Mar 2025 14:00 Selected Answer: - Upvotes: 1

No, Yes, Yes

Comment 3

ID: 1279018 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 05 Sep 2024 17:23 Selected Answer: - Upvotes: 4

What kind of a question this is!!. Asking whether it reads the local image or not. How does it become something about AI? That's all about whether you know the python syntax for reading the file. This company should get an award for being in this state

Comment 3.1

ID: 1281471 User: mrg998 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 10 Sep 2024 12:29 Selected Answer: - Upvotes: 1

Agree. If they want to include Python or C# code, they should at least include modules for it on MS Learn.

Comment 4

ID: 1251454 User: killershin Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Fri 19 Jul 2024 23:54 Selected Answer: - Upvotes: 1

No Yes Yes

Comment 5

ID: 1218242 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:22 Selected Answer: - Upvotes: 2

No
Yes
Yes

Comment 6

ID: 1213815 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 19 May 2024 15:39 Selected Answer: - Upvotes: 1

No Yes Yes

Comment 7

ID: 1185716 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 29 Mar 2024 23:13 Selected Answer: - Upvotes: 2

Here are the answers to your questions:

The code will perform face recognition. No, the code does not include any features related to face recognition. It is only analyzing the image for tags and descriptions.
The code will list tags and their associated confidence. Yes, the code includes a loop that prints each tag and its associated confidence level.
The code will read an image file from the local file system. Yes, the code opens an image file from the local file system for analysis.
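A minimal sketch of the two "Yes" statements above: the field names mirror the Analyze Image v3.2 JSON shape, but the sample tag values and the image bytes are made up for illustration:

```python
import io
import json

# Hypothetical sample of an Image Analysis response (illustrative values only).
sample_response = json.loads("""
{
  "tags": [
    {"name": "outdoor", "confidence": 0.98},
    {"name": "dog", "confidence": 0.91}
  ],
  "description": {"captions": [{"text": "a dog outdoors", "confidence": 0.87}]}
}
""")

def list_tags(analysis):
    """Statement 2: list each tag with its associated confidence."""
    return [(t["name"], t["confidence"]) for t in analysis["tags"]]

# Statement 3: reading image bytes as the code would from the local file
# system (an in-memory stand-in here so the sketch runs without a real file).
image_stream = io.BytesIO(b"\x89PNG fake image bytes")
image_bytes = image_stream.read()

tags = list_tags(sample_response)
```

Note there is no face-recognition call anywhere in the flow, which is why statement 1 is "No".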

Comment 8

ID: 1175967 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 17 Mar 2024 18:06 Selected Answer: - Upvotes: 3

The given answers by exam topics are CORRECT

33. AI-102 Topic 2 Question 40

Sequence
107
Discussion ID
145640
Source URL
https://www.examtopics.com/discussions/microsoft/view/145640-exam-ai-102-topic-2-question-40-discussion/
Posted By
anto69
Posted At
Aug. 13, 2024, 4:46 a.m.

Question

HOTSPOT
-

You are developing an app that will use the Azure AI Vision API to analyze an image.

You need configure the request that will be used by the app to identify whether an image is clipart or a line drawing.

How should you complete the request? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 3 comments Click to expand

Comment 1

ID: 1278934 User: 9c652a0 Badges: Highly Voted Relative Date: 1 year, 6 months ago Absolute Date: Thu 05 Sep 2024 14:46 Selected Answer: - Upvotes: 8

POST, imagetype
https://learn.microsoft.com/en-us/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP
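A sketch of how that request (POST with the ImageType visual feature) might be assembled in Python. The endpoint and key are placeholders, and the actual HTTP call is left commented out:

```python
from urllib.parse import urlencode

# Placeholder endpoint and key for a Computer Vision resource.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
method = "POST"  # Analyze Image is a POST operation

# Image-type detection is requested via the ImageType visual feature.
url = f"{endpoint}/vision/v3.2/analyze?" + urlencode({"visualFeatures": "ImageType"})
headers = {
    "Ocp-Apim-Subscription-Key": "<key>",        # placeholder
    "Content-Type": "application/octet-stream",  # raw image bytes in the body
}
# A real call would be: requests.post(url, headers=headers, data=image_bytes)
```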

Comment 2

ID: 1381498 User: Mattt Badges: Most Recent Relative Date: 1 year ago Absolute Date: Mon 10 Mar 2025 14:11 Selected Answer: - Upvotes: 1

Post
imageType

Comment 3

ID: 1264933 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 04:46 Selected Answer: - Upvotes: 4

POST + imageType, for me and ChatGPT too

34. AI-102 Topic 2 Question 33

Sequence
108
Discussion ID
122636
Source URL
https://www.examtopics.com/discussions/microsoft/view/122636-exam-ai-102-topic-2-question-33-discussion/
Posted By
JDKJDKJDK
Posted At
Oct. 6, 2023, 7:30 a.m.

Question

You have a 20-GB video file named File1.avi that is stored on a local drive.

You need to index File1.avi by using the Azure Video Indexer website.

What should you do first?

  • A. Upload File1.avi to an Azure Storage queue.
  • B. Upload File1.avi to the Azure Video Indexer website.
  • C. Upload File1.avi to Microsoft OneDrive.
  • D. Upload File1.avi to the www.youtube.com webpage.

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 21 comments Click to expand

Comment 1

ID: 1026290 User: suryakalla Badges: Highly Voted Relative Date: 2 years, 5 months ago Absolute Date: Fri 06 Oct 2023 08:03 Selected Answer: - Upvotes: 21

This question is part of free assessment given by Microsoft and the answer in that was C.

Comment 1.1

ID: 1026726 User: Student2023 Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Fri 06 Oct 2023 16:47 Selected Answer: - Upvotes: 2

because that is the correct option

Comment 2

ID: 1183832 User: Mehe323 Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Wed 27 Mar 2024 04:48 Selected Answer: C Upvotes: 6

Upload file size and video duration
If uploading a file from your device, the file size limit is 2 GB.
If the video is uploaded from a URL, the file size limit is 30 GB. The URL must lead to an online media file with a media file extension (for example myvideo.MP4) and not a webpage such as https://www.youtube.com.

The file duration limit is 4 hours.

https://learn.microsoft.com/en-us/azure/azure-video-indexer/avi-support-matrix
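The size rule cited above can be sketched as a small helper. The 2 GB direct-upload and 30 GB URL-upload limits are the ones quoted from the support matrix; verify against current docs:

```python
GB = 1024 ** 3
DIRECT_UPLOAD_LIMIT = 2 * GB   # device upload to the Video Indexer website
URL_UPLOAD_LIMIT = 30 * GB     # upload from a URL-accessible media file

def upload_route(file_size_bytes):
    """Pick the upload path for the Azure Video Indexer website by file size."""
    if file_size_bytes <= DIRECT_UPLOAD_LIMIT:
        return "direct upload to the Video Indexer website"
    if file_size_bytes <= URL_UPLOAD_LIMIT:
        return "upload to a URL-accessible store (e.g. OneDrive) first"
    return "exceeds supported limits"

# File1.avi is 20 GB, so it must go to OneDrive (or similar) first: answer C.
route = upload_route(20 * GB)
```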

Comment 3

ID: 1329338 User: pabsinaz Badges: Most Recent Relative Date: 1 year, 2 months ago Absolute Date: Fri 20 Dec 2024 08:33 Selected Answer: C Upvotes: 3

Option C because direct upload to Azure Video Indexer is limited to 2GB.

Comment 4

ID: 1297460 User: 4371883 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 14 Oct 2024 12:34 Selected Answer: - Upvotes: 2

got this in Oct 2024 exam

Comment 4.1

ID: 1299110 User: Slapp1n Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Thu 17 Oct 2024 09:27 Selected Answer: - Upvotes: 1

did you get any labs/simulations?

Comment 5

ID: 1285094 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 10:22 Selected Answer: C Upvotes: 3

Max size for direct upload is 2 GB, so you have to upload it somewhere else first.

Comment 6

ID: 1282925 User: AzureGeek79 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 13 Sep 2024 03:44 Selected Answer: - Upvotes: 1

ChatGPT says, "The correct option is B. Upload File1.avi to the Azure Video Indexer website. This is the appropriate first step to index your video file using Azure Video Indexer."

Comment 7

ID: 1275101 User: chani_ Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 30 Aug 2024 16:15 Selected Answer: - Upvotes: 1

Answer C: For large files, Azure Video Indexer supports uploading through a linked storage service like OneDrive.

Comment 8

ID: 1235167 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:12 Selected Answer: C Upvotes: 1

The answer is C.

Comment 9

ID: 1218255 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:31 Selected Answer: C Upvotes: 1

C is right answer.

Comment 10

ID: 1214199 User: TJ001 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 09:57 Selected Answer: - Upvotes: 4

Max file size for direct upload is 2 GB; 30 GB is through a URL. So answer C.

Comment 11

ID: 1202824 User: Jimmy1017 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Fri 26 Apr 2024 23:34 Selected Answer: - Upvotes: 3

B. Upload File1.avi to the Azure Video Indexer website.

Explanation:

The Azure Video Indexer website is specifically designed to analyze and index video files, extracting insights such as keywords, faces, sentiments, and more.
Uploading File1.avi directly to the Azure Video Indexer website allows the platform to process the video file and generate the necessary metadata and insights.
Options A, C, and D are not relevant for indexing File1.avi using the Azure Video Indexer website. Uploading to Azure Storage queue (option A) is not appropriate for indexing videos. Microsoft OneDrive (option C) is a cloud storage service and doesn't provide video indexing capabilities like the Azure Video Indexer. Uploading to YouTube (option D) is also not relevant as the task is to index the video using the Azure Video Indexer website. Therefore, option B is the correct choice.

Comment 12

ID: 1191013 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 07 Apr 2024 16:26 Selected Answer: C Upvotes: 2

C is correct

Comment 13

ID: 1183110 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 26 Mar 2024 08:56 Selected Answer: - Upvotes: 1

Final Answer:
C

Comment 14

ID: 1154358 User: schmoofed Badges: - Relative Date: 2 years ago Absolute Date: Tue 20 Feb 2024 00:52 Selected Answer: C Upvotes: 2

Looks like C is correct due to the large size of the file being 20GB in the question. See article here: https://learn.microsoft.com/en-us/azure/azure-video-indexer/odrv-download

Comment 14.1

ID: 1322000 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 04 Dec 2024 16:44 Selected Answer: - Upvotes: 1

Doesn't that article clearly state that the file size can't be bigger than 2 GB if you upload via URL? See section -> Troubleshoot uploading issues, second from the bottom.

Comment 15

ID: 1152270 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 23:51 Selected Answer: B Upvotes: 3

B is correct.

Comment 15.1

ID: 1371550 User: gyaansastra Badges: - Relative Date: 1 year ago Absolute Date: Sun 09 Mar 2025 15:15 Selected Answer: - Upvotes: 1

The first step is to upload the video file directly to the Azure Video Indexer website. Azure Video Indexer will process the video, extract metadata, and generate insights.

By uploading the video file to the Azure Video Indexer website, you initiate the indexing process, which allows the service to analyze and create an index of the content in the video.

Comment 16

ID: 1131694 User: suzanne_exam Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Thu 25 Jan 2024 14:23 Selected Answer: - Upvotes: 2

C: There is a max byte array of 2 GB when uploading directly to the indexer, so it has to be via a URL.

Comment 17

ID: 1091851 User: MelMac Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Sat 09 Dec 2023 15:41 Selected Answer: C Upvotes: 1

https://learn.microsoft.com/en-us/azure/azure-video-indexer/considerations-when-use-at-scale

35. AI-102 Topic 2 Question 24

Sequence
111
Discussion ID
102686
Source URL
https://www.examtopics.com/discussions/microsoft/view/102686-exam-ai-102-topic-2-question-24-discussion/
Posted By
RAN_L
Posted At
March 15, 2023, 12:36 p.m.

Question

HOTSPOT
-

You make an API request and receive the results shown in the following exhibits.

image

image

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 7 comments Click to expand

Comment 1

ID: 1220333 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 17:19 Selected Answer: - Upvotes: 2

From the response, you can see that the answers are 'detect' and '797, 201'.

Comment 3

ID: 1214373 User: takaimomoGcup Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 20 Nov 2024 16:59 Selected Answer: - Upvotes: 2

memorize. detects and 797, 201.

Comment 4

ID: 1182997 User: varinder82 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Thu 26 Sep 2024 04:15 Selected Answer: - Upvotes: 1

Final Answer:
1. detects
2. 797, 201

Comment 5

ID: 839835 User: RAN_L Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Fri 15 Sep 2023 11:36 Selected Answer: - Upvotes: 4

The API detects faces.

A face that can be used in person enrollment is at position 797, 201 within the photo.

This question provides information about an API request made to a face detection service. The request is sent to the endpoint "https://facetesting.cognitiveservices.azure.com/face/v1.0/detect" with the content of an image in the JSON format. The response from the API includes an array of detected faces, each with a unique faceId, faceRectangle, and faceAttributes.

The first statement asks what the API does with faces. The correct answer is "detects" because the endpoint used in the request is "/detect," which implies that the API is used for face detection.

The second statement asks about the position of a face that can be used for person enrollment. The face's position is specified in the "faceRectangle" field of the JSON response. The correct answer is "118, 754" because that is the "left" and "top" position of the face rectangle for the fourth face in the response, which has a high enough quality for recognition to be used in person enrollment.
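A sketch of picking the enrollment-capable face out of a /detect response. The field names follow the Face API JSON shape, but the sample faces and their quality labels are invented for illustration (one face marked high quality at left=797, top=201, matching the consensus answer above):

```python
# Hypothetical /detect response fragment (illustrative values only).
sample_faces = [
    {"faceRectangle": {"left": 118, "top": 754},
     "faceAttributes": {"qualityForRecognition": "low"}},
    {"faceRectangle": {"left": 797, "top": 201},
     "faceAttributes": {"qualityForRecognition": "high"}},
]

def enrollable_position(faces):
    """Return (left, top) of the first face whose qualityForRecognition is
    'high' -- the quality level recommended for person enrollment."""
    for face in faces:
        if face["faceAttributes"]["qualityForRecognition"] == "high":
            rect = face["faceRectangle"]
            return (rect["left"], rect["top"])
    return None

pos = enrollable_position(sample_faces)
```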

Comment 5.1

ID: 869619 User: uira Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Fri 13 Oct 2023 18:44 Selected Answer: - Upvotes: 1

"118, 754" has low quality, isn't it?

Comment 5.1.1

ID: 1366317 User: Mattt Badges: - Relative Date: 1 year ago Absolute Date: Fri 07 Mar 2025 16:40 Selected Answer: - Upvotes: 1

No, it isn't.
It's the high quality

36. AI-102 Topic 4 Question 29

Sequence
112
Discussion ID
149782
Source URL
https://www.examtopics.com/discussions/microsoft/view/149782-exam-ai-102-topic-4-question-29-discussion/
Posted By
cheetah313
Posted At
Oct. 19, 2024, 11:16 a.m.

Question

You have an Azure subscription that contains an Azure AI Document Intelligence resource named DI1. DI1 uses the Standard S0 pricing tier.

You have the files shown in the following table.

image

Which files can you analyze by using DI1?

  • A. File 1.pdf only
  • B. File2.jpg only
  • C. File3.tiff only
  • D. File2.jpg and File3.tiff only
  • E. File1.pdf, File2.jpg, and File3.tiff

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 13 comments Click to expand

Comment 1

ID: 1312281 User: Alan_CA Badges: Highly Voted Relative Date: 1 year, 3 months ago Absolute Date: Thu 14 Nov 2024 20:12 Selected Answer: C Upvotes: 6

File1 > 500 Mb so NO
File 2 <50x50 pixels so NO
File 3 YES

Comment 2

ID: 1322771 User: chrillelundmark Badges: Highly Voted Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 14:55 Selected Answer: C Upvotes: 5

Have a look on any of the prebuilt models and you will see File3.tiff is the only one that fits.

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/prebuilt/contract?view=doc-intel-4.0.0#input-requirements

Comment 3

ID: 1365436 User: gyaansastra Badges: Most Recent Relative Date: 1 year ago Absolute Date: Wed 05 Mar 2025 15:43 Selected Answer: C Upvotes: 1

The correct answer is:

C. File3.tiff only

Here's why:

File1.pdf (800 MB): The Standard S0 pricing tier for Azure AI Document Intelligence has file size limits, and 800 MB exceeds the allowable limit for file analysis.

File2.jpg (1 KB, 25 x 25 pixels): This file is too small (both in terms of size and resolution) to be analyzed effectively. The minimum recommended resolution for analysis is typically higher, as such a low resolution might not provide enough data for meaningful results.

File3.tiff (5 MB, 5000 x 5000 pixels): This file is within the size and resolution limits for the Standard S0 tier. The system is capable of analyzing high-resolution images like this.
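The input checks described above can be sketched as a small validator. The 500 MB file limit and the 50x50-to-10,000x10,000 pixel range are the limits cited in this thread; confirm against the current input-requirements page:

```python
MB = 1024 ** 2
MAX_FILE_BYTES = 500 * MB       # S0-tier file-size limit cited in the thread
MIN_DIM, MAX_DIM = 50, 10_000   # image dimension bounds, in pixels

def analyzable(size_bytes, width_px=None, height_px=None):
    """Check a file against the S0-tier limits (dimensions apply to images)."""
    if size_bytes > MAX_FILE_BYTES:
        return False
    if width_px is not None:
        if not (MIN_DIM <= width_px <= MAX_DIM and MIN_DIM <= height_px <= MAX_DIM):
            return False
    return True

file1 = analyzable(800 * MB)            # 800 MB PDF: over the size limit
file2 = analyzable(1024, 25, 25)        # 25 x 25 JPG: below the 50 x 50 minimum
file3 = analyzable(5 * MB, 5000, 5000)  # 5 MB, 5000 x 5000 TIFF: within limits
```

Only File3.tiff passes both checks, matching answer C.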

Comment 4

ID: 1330559 User: kennynelcon Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sun 22 Dec 2024 21:35 Selected Answer: D Upvotes: 3

In the Standard (S0) tier of Azure AI Document Intelligence, the key constraints you usually run into are:

File‐size limits of roughly 200 MB for PDFs and up to 420 MB for images (JPG/PNG/TIFF).
Image dimension limits of up to around 10,000 × 10,000 pixels.

File1.pdf (800 MB) exceeds the typical 200 MB PDF limit, so it cannot be analyzed.
File2.jpg (1 KB, 25×25) is well below both the size and dimension limits for images, so it can be analyzed.
File3.tiff (5 MB, 5000×5000) is below the ~420 MB limit and within the 10,000×10,000 pixel limit, so it can be analyzed.
Therefore, the files you can analyze in DI1 (S0 tier) are File2.jpg and File3.tiff.

Comment 4.1

ID: 1340672 User: Architect_CTO Badges: - Relative Date: 1 year, 1 month ago Absolute Date: Wed 15 Jan 2025 05:26 Selected Answer: - Upvotes: 2

File1.pdf (exceeds) the limit of 500MB not 200....

Comment 4.2

ID: 1332815 User: PK234 Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sat 28 Dec 2024 07:07 Selected Answer: - Upvotes: 2

Image dimensions must be between 50 pixels x 50 pixels and 10,000 pixels x 10,000 pixels. So the file2.jpg don't fit

Comment 5

ID: 1319307 User: nastolgia Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 16:21 Selected Answer: C Upvotes: 2

File 3 only

Comment 6

ID: 1306593 User: Homi Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Sun 03 Nov 2024 19:07 Selected Answer: C Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/prebuilt/read?view=doc-intel-4.0.0&tabs=sample-code

Comment 7

ID: 1304830 User: e41f7aa Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Wed 30 Oct 2024 04:27 Selected Answer: D Upvotes: 1

Between 50 x 50 pixels and 10,000 pixels x 10,000 pixels. So D

Comment 7.1

ID: 1322773 User: chrillelundmark Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 15:01 Selected Answer: - Upvotes: 1

What about the file size limitations? Are you sure that 25x25 is between 50x50 and 10,000x10,000? That's beyond my understanding.

Comment 8

ID: 1299963 User: cheetah313 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Sat 19 Oct 2024 11:16 Selected Answer: B Upvotes: 3

According to ChatGPT the max size for PDFs is 200 MB and for images 50 MB. so File 1 is ruled out. But then it also says that the max resolution for image files "should" not exceed 4200 x 4200 pixels.

Therefore I would say answer B: "File2.jpg only" is correct.

Comment 8.1

ID: 1304319 User: e41f7aa Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 08:07 Selected Answer: - Upvotes: 1

Between 50 x 50 pixels and 10,000 pixels x 10,000 pixels. So E

Comment 8.1.1

ID: 1304320 User: e41f7aa Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 08:07 Selected Answer: - Upvotes: 1

So D is the answer

37. AI-102 Topic 2 Question 22

Sequence
119
Discussion ID
102936
Source URL
https://www.examtopics.com/discussions/microsoft/view/102936-exam-ai-102-topic-2-question-22-discussion/
Posted By
jimbojambo
Posted At
March 17, 2023, 2:05 p.m.

Question

HOTSPOT -

You have a library that contains thousands of images.

You need to tag the images as photographs, drawings, or clipart.

Which service endpoint and response property should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 17 comments Click to expand

Comment 1

ID: 841990 User: jimbojambo Badges: Highly Voted Relative Date: 2 years, 12 months ago Absolute Date: Fri 17 Mar 2023 14:05 Selected Answer: - Upvotes: 57

I think that the answers are wrong. They should be:
1 - Computer Vision analyze image
2 - imageType

According to https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-detecting-image-types Computer Vision can analyze the content type of images, indicating whether an image is clip art or a line drawing

Comment 1.1

ID: 910902 User: mmaguero Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Wed 31 May 2023 08:21 Selected Answer: - Upvotes: 2

Agree, see json example at: https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b

Comment 1.1.1

ID: 1222392 User: PeteColag Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 01 Jun 2024 02:19 Selected Answer: - Upvotes: 1

This link no longer works

Comment 1.2

ID: 1214175 User: TJ001 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 09:26 Selected Answer: - Upvotes: 1

The out-of-the-box option is the best bet; otherwise use Custom Vision and train a model for object detection, which is more work.

Comment 2

ID: 1056562 User: trashbox Badges: Highly Voted Relative Date: 2 years, 4 months ago Absolute Date: Sun 29 Oct 2023 05:07 Selected Answer: - Upvotes: 5

Appeared on Oct/29/2023.

Comment 3

ID: 1361764 User: gyaansastra Badges: Most Recent Relative Date: 1 year ago Absolute Date: Wed 26 Feb 2025 07:02 Selected Answer: - Upvotes: 1

Service Endpoint:
a. Computer Vision analyze images

This endpoint provides comprehensive analysis of images, including categorization.

Property:
c. imageType

This property can be used to identify whether an image is a photograph, drawing, or clipart.

Summary:
Service endpoint: Computer Vision analyze images
Property: imageType

By using the Computer Vision analyze images endpoint with the imageType property, we can effectively tag the images as photographs, drawings, or clipart.

Comment 4

ID: 1285083 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 09:59 Selected Answer: - Upvotes: 1

first is - Computer Vision analyze images
2nd - imageType

info here - https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-detecting-image-types

Comment 5

ID: 1248476 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 15 Jul 2024 19:26 Selected Answer: - Upvotes: 1

1. Computer Vision analyze images
2. imageType

Comment 6

ID: 1225485 User: NagaoShingo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 14:34 Selected Answer: - Upvotes: 1

1. Computer Vision analyze images
2. imageType

Comment 7

ID: 1220331 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:17 Selected Answer: - Upvotes: 1

The following aligned answer is correct.
1. Computer Vision analyze images
2. imageType

Comment 8

ID: 1218292 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 14:02 Selected Answer: - Upvotes: 1

1. Computer Vision analyze images
2. imageType

Comment 9

ID: 1214379 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:04 Selected Answer: - Upvotes: 1

Service endpoint should be "Computer Vision analyze images". Property should be "imageType".

Comment 11

ID: 1039152 User: sl_mslconsulting Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Tue 10 Oct 2023 06:40 Selected Answer: - Upvotes: 2

I would say the answers are correct. Image type can only indicate whether an image is clip art or a line drawing. It can’t tell you if it’s a photograph or not - you can’t just assume that if the image isn’t a clip art or a line drawing will automatically be categorized as a photograph. It’s a very sloppy solution IMO. Besides you have thousands of images and it’s a good reason to create your own model.

Comment 11.1

ID: 1222401 User: PeteColag Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 01 Jun 2024 02:41 Selected Answer: - Upvotes: 1

The example at https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-detecting-image-types

implies that if the "imageType": {
"clipArtType": 0,
"lineDrawingType": 0
},
then we have a photograph.
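A sketch of turning the imageType property into a tag. The clipArtType cutoff used here (2 or higher counted as clip art) is an assumption based on the documented 0-3 scale, not an official mapping:

```python
def classify(image_type):
    """Map an Analyze Image 'imageType' object to a library tag.

    clipArtType runs 0-3 (0 = non-clip-art, 3 = good clip art) and
    lineDrawingType is 0 or 1. Anything that is neither clip art nor a
    line drawing is treated as a photograph, as discussed above.
    """
    if image_type["clipArtType"] >= 2:  # assumed cutoff: 2-3 means clip art
        return "clipart"
    if image_type["lineDrawingType"] == 1:
        return "line drawing"
    return "photograph"

label = classify({"clipArtType": 0, "lineDrawingType": 0})
```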

Comment 12

ID: 924986 User: HarshSharma786 Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Fri 16 Jun 2023 10:32 Selected Answer: - Upvotes: 5

To tag images as photographs, drawings, or clipart, you should use the following service endpoint and response property:

Service endpoint: Computer Vision image classification
Property: imageType

The Computer Vision image classification endpoint allows you to classify images into different categories, and the imageType property specifically provides information about the type of image, such as whether it is a photograph, drawing, or clipart.

Comment 13

ID: 892206 User: ulloo Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Mon 08 May 2023 15:37 Selected Answer: - Upvotes: 2

ChatGPT:
You can use the Microsoft Azure Computer Vision API to tag the images as photographs, drawings, or clipart.

You can call the "Describe Image" API endpoint and use the "imageType" property of the response to determine if the image is a photograph, a drawing, or clipart. The "imageType" property can have the following values:

"Clipart": Indicates that the image is a clipart.
"LineDrawing": Indicates that the image is a line drawing.
"Photograph": Indicates that the image is a photograph.
You can send an HTTP POST request to the API endpoint with the image file as the request body and specify the "imageType" in the "visualFeatures" parameter. The API will return a JSON response containing the "imageType" property along with other properties such as "tags", "description", and "categories".

38. AI-102 Topic 2 Question 34

Sequence
120
Discussion ID
134929
Source URL
https://www.examtopics.com/discussions/microsoft/view/134929-exam-ai-102-topic-2-question-34-discussion/
Posted By
audlindr
Posted At
Feb. 29, 2024, 6:27 p.m.

Question

HOTSPOT
-

You are building an app that will share user images.

You need to configure the app to meet the following requirements:

• Uploaded images must be scanned and any text must be extracted from the images.
• Extracted text must be analyzed for the presence of profane language.
• The solution must minimize development effort.

What should you use for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 15 comments Click to expand

Comment 1

ID: 1185670 User: Murtuza Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Fri 29 Mar 2024 22:02 Selected Answer: - Upvotes: 10

Text extraction from images: you can use an Optical Character Recognition (OCR) service to scan and extract text from the uploaded images. The Computer Vision API is an example of a service that provides OCR capabilities.
Profanity check: once the text is extracted, you can use a text analytics service to check for the presence of profane language. A service such as the Content Moderator API can help identify and filter out inappropriate content.
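The two-step pipeline can be sketched with both service calls stubbed out so the flow runs locally. extract_text and contains_profanity below are placeholders for the real Read OCR and Content Moderator calls, and PROFANE_TERMS is an invented toy list:

```python
PROFANE_TERMS = {"darn", "heck"}  # toy placeholder list for illustration

def extract_text(image_bytes):
    """Stand-in for step 1: a real app would POST image_bytes to the
    Computer Vision Read API and poll for the OCR result."""
    return "what the heck is this"  # canned OCR output for the sketch

def contains_profanity(text):
    """Stand-in for step 2: a real app would send the extracted text to
    the Content Moderator text-screening endpoint."""
    return any(term in text.lower().split() for term in PROFANE_TERMS)

# Pipeline: scan the uploaded image, then screen the extracted text.
text = extract_text(b"...uploaded image bytes...")
flagged = contains_profanity(text)
```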

Comment 1.1

ID: 1297160 User: aa18a1a Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Mon 14 Oct 2024 02:55 Selected Answer: - Upvotes: 1

Agree with this answer. Take a look at this small blurb on this portion of the Microsoft docs:
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-ocr?view=doc-intel-4.0.0
"If you want to extract text from PDFs, Office files, or HTML documents and document images, use the Document Intelligence Read OCR model. It's optimized for text-heavy digital and scanned documents..."
There's nowhere in this question that states these images will be "document images". Despite the fact that document intelligence CAN accept images, I've received no indication that we should assume these images will be documents.

Comment 2

ID: 1226929 User: haby Badges: Highly Voted Relative Date: 1 year, 9 months ago Absolute Date: Sat 08 Jun 2024 22:59 Selected Answer: - Upvotes: 6

Azure AI Vision for me. Both Document Intelligence and AI Vision have OCR, but Document Intelligence is aimed more at structured and semi-structured files such as PDFs, W-2s, and forms. AI Vision is the easier way to extract simple text from general images.
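To make the comparison concrete, here is a minimal Python sketch of how the Azure AI Vision v3.2 Read (OCR) request could be assembled for a general image. The endpoint and key are placeholders, the path follows the public REST reference, and the request is only built, not sent:

```python
import json

# Placeholders — substitute your own Azure AI Vision resource name and key.
ENDPOINT = "https://<resource-name>.cognitiveservices.azure.com"
KEY = "<your-key>"

def build_read_request(image_url: str) -> dict:
    """Assemble (without sending) the Azure AI Vision v3.2 Read OCR call."""
    return {
        "method": "POST",
        "url": f"{ENDPOINT}/vision/v3.2/read/analyze",
        "headers": {
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"url": image_url}),
    }

req = build_read_request("https://example.com/user-upload.png")
print(req["url"])
```

The actual call is asynchronous: the POST returns an `Operation-Location` header that you poll for the extracted text.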

Comment 3

ID: 1361801 User: gyaansastra Badges: Most Recent Relative Date: 1 year ago Absolute Date: Wed 26 Feb 2025 08:02 Selected Answer: - Upvotes: 3

Text Extraction:
Azure AI Computer Vision: This service can effectively extract text from images using OCR (Optical Character Recognition).

Profane Language Detection:
Content Moderator: This service is designed to analyze text for the presence of profane language and other inappropriate content.

Comment 4

ID: 1259633 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Fri 02 Aug 2024 04:29 Selected Answer: - Upvotes: 3

Text extraction: Azure AI Document Intelligence
Profane language detection: Content Moderator (now Azure AI Content Safety)

Comment 5

ID: 1235702 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 23 Jun 2024 07:30 Selected Answer: - Upvotes: 2

Azure AI Computer Vision has ReadApi for OCR feature to extract text from images, and content moderator for removing anything profane

Comment 5.1

ID: 1235703 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 23 Jun 2024 07:32 Selected Answer: - Upvotes: 1

Azure AI Document Intelligence (formerly Form Recognizer):
Purpose: Specialized document analysis.
Capabilities:
Advanced OCR to extract text, key-value pairs, and table data from documents.
Customizable models to extract structured data from forms and invoices.
Tailored for processing documents such as forms, receipts, invoices, business cards, and other structured documents.
Here we are dealing with scanned images

Comment 6

ID: 1225483 User: NagaoShingo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 14:33 Selected Answer: - Upvotes: 1

1. Azure AI Document Intelligence
2. Content Moderator

Comment 6.1

ID: 1244469 User: Toby86 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 08 Jul 2024 18:43 Selected Answer: - Upvotes: 3

We're looking at Images, not documents. And if the images contained documents, we don't know that from the question. Answer is correct

Comment 6.1.1

ID: 1285095 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 10:26 Selected Answer: - Upvotes: 1

Agreed, they have not said whether the images will be images of documents or not. They could be images of anything that we need to extract text from.

Comment 7

ID: 1218254 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:31 Selected Answer: - Upvotes: 1

1. Azure AI Document Intelligence
2. Content Moderator

Comment 8

ID: 1217351 User: funny_penguin Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 11:35 Selected Answer: - Upvotes: 1

on exam but with AI Content Safety instead of content moderator. For the first, I chose Azure AI Document Intelligence

Comment 9

ID: 1204083 User: ShardulShende Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Mon 29 Apr 2024 17:21 Selected Answer: - Upvotes: 2

Why not Azure AI Document Intelligence for the first option?

Comment 9.1

ID: 1204085 User: ShardulShende Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Mon 29 Apr 2024 17:24 Selected Answer: - Upvotes: 1

Should be Azure AI Document Intelligence. Document Intelligence Read Optical Character Recognition (OCR) model runs at a higher resolution than Azure AI Vision Read and extracts print and handwritten text from PDF documents and scanned images.
https://learn.microsoft.com/en-us/answers/questions/1512283/vision-studio-vs-document-intelligence-studio-ocr

Comment 10

ID: 1162838 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Thu 29 Feb 2024 18:27 Selected Answer: - Upvotes: 4

I hope the questions are revisited by Microsoft. Why would there be a question on deprecated services?
https://learn.microsoft.com/en-us/azure/ai-services/content-moderator/overview
Azure Content Moderator is being deprecated in February 2024, and will be retired by February 2027. It is being replaced by Azure AI Content Safety, which offers advanced AI features and enhanced performance.
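For readers updating their notes for the replacement service, here is a minimal sketch of building (not sending) an Azure AI Content Safety text:analyze request for the extracted text. The endpoint and key are placeholders, and the category names follow the public docs:

```python
import json

ENDPOINT = "https://<resource-name>.cognitiveservices.azure.com"  # placeholder

def build_text_analyze_request(text: str) -> dict:
    """Assemble (without sending) a Content Safety text:analyze call."""
    return {
        "method": "POST",
        "url": f"{ENDPOINT}/contentsafety/text:analyze?api-version=2023-10-01",
        "headers": {
            "Ocp-Apim-Subscription-Key": "<your-key>",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "text": text,
            "categories": ["Hate", "Sexual", "Violence", "SelfHarm"],
        }),
    }

req = build_text_analyze_request("text extracted from an uploaded image")
print(req["url"])
```

The response scores each category with a severity level, which the app can threshold to decide whether to block the image.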

39. AI-102 Topic 3 Question 6

Sequence
121
Discussion ID
60473
Source URL
https://www.examtopics.com/discussions/microsoft/view/60473-exam-ai-102-topic-3-question-6-discussion/
Posted By
SuperPetey
Posted At
Aug. 24, 2021, 9:51 a.m.

Question

DRAG DROP -
You train a Custom Vision model used in a mobile app.
You receive 1,000 new images that do not have any associated data.
You need to use the images to retrain the model. The solution must minimize how long it takes to retrain the model.
Which three actions should you perform in the Custom Vision portal? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand


Comments 26 comments Click to expand

Comment 1

ID: 430575 User: SuperPetey Badges: Highly Voted Relative Date: 4 years, 6 months ago Absolute Date: Tue 24 Aug 2021 09:51 Selected Answer: - Upvotes: 147

The given answer is incorrect. The question emphasizes two things: 1) the model has already been trained, and 2) the solution should be expedient. The given answer would be very slow, since it manually tags 1,000 images. Instead:

1.) Upload all the images
2.) Get suggested tags
3.) Review the suggestions and confirm the tags

reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/suggested-tags
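One practical detail when uploading 1,000 images in step 1: batch uploads are capped per call (64 images per batch, per my reading of the Custom Vision docs — treat the exact limit as an assumption). A quick sketch of the batching arithmetic:

```python
def batches(items, size=64):
    """Split uploads into fixed-size batches; Custom Vision caps batch image
    uploads per call (64 assumed here)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

paths = [f"img_{n:04}.jpg" for n in range(1000)]  # hypothetical file names
groups = batches(paths)
print(len(groups))      # 16 batches for 1,000 images
print(len(groups[-1]))  # the last batch holds the remaining 40 images
```

Each batch would then go to the training client's image-upload call before requesting suggested tags for the resulting image IDs.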

Comment 1.1

ID: 444179 User: Derin_tade Badges: - Relative Date: 4 years, 5 months ago Absolute Date: Mon 13 Sep 2021 22:17 Selected Answer: - Upvotes: 3

Thank you.

Comment 1.2

ID: 464946 User: vominhtri854 Badges: - Relative Date: 4 years, 4 months ago Absolute Date: Wed 20 Oct 2021 08:15 Selected Answer: - Upvotes: 3

When you tag images for a Custom Vision model, the service uses the latest trained iteration of the model to predict the labels of untagged images.
We need the latest trained iteration to predict the labels, but these images DO NOT HAVE ANY ASSOCIATED DATA.

Comment 2

ID: 673909 User: STH Badges: Highly Voted Relative Date: 3 years, 5 months ago Absolute Date: Tue 20 Sep 2022 09:44 Selected Answer: - Upvotes: 9

Answer is correct.

When uploading all images from the same folder, you can tag all of them with the same value at the same time.
Then you won't tag all 1,000 images one by one, but only once per category (which is the time saving the question asks for).

Also, even if the model is already trained, images are uploaded to the workspace, not to a specific trained iteration.
You then cannot get tag suggestions when importing an image. There are none; that feature simply does not exist.

Try by yourself :
https://learn.microsoft.com/en-us/training/modules/classify-images/5-exercise-custom-vision

Comment 2.1

ID: 675847 User: STH Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Thu 22 Sep 2022 09:36 Selected Answer: - Upvotes: 15

My bad, the feature is real:
https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/suggested-tags

so the right answer is
- Upload all
- Get suggested tags
- Review and confirm tags

Comment 3

ID: 1361866 User: gyaansastra Badges: Most Recent Relative Date: 1 year ago Absolute Date: Wed 26 Feb 2025 11:28 Selected Answer: - Upvotes: 1

Based on best practices for the Custom Vision service, the correct sequence of three actions would be:

Upload all the images.
Get suggested tags.
Review the suggestions and confirm the tags.

This sequence minimizes retraining time because:

First, you need to get all 1,000 images into the Custom Vision portal
Then, instead of manually tagging all images (which would be time-consuming), you use the auto-tagging/suggestion feature
Finally, you review and confirm these suggested tags to ensure accuracy before retraining

This approach leverages Custom Vision's automated tagging capabilities to significantly reduce the manual effort required, while still maintaining quality through human review of the suggestions.

Comment 4

ID: 1287918 User: Skyhawks Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 22 Sep 2024 23:26 Selected Answer: - Upvotes: 2

1) Smart Labeler (Suggested Tags):

The Smart Labeler functionality uses the latest trained iteration of the model to predict labels for new images. Therefore, even if the images are new, as long as they share similarities with what the model has already seen, the suggested tags feature can save time and effort.

Official Documentation: The Azure documentation clearly states that the Smart Labeler can automatically suggest tags for uploaded images, provided they are similar in context to previously trained data.

2) Uploading All Images:

Bulk uploading images is the most time-efficient method. There is no need to manually categorize or upload by folder.

The official Azure documentation supports SuperPetey's reasoning

Comment 5

ID: 1248498 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 15 Jul 2024 19:59 Selected Answer: - Upvotes: 2

1.) upload all the images
2.) Get suggested tags
3.) Review the suggestions and confirm the tags

Comment 6

ID: 1244476 User: Toby86 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 08 Jul 2024 19:01 Selected Answer: - Upvotes: 1

Can't be correct. You want to tell me people should manually tag 1,000 images?
And how do you categorize them into folders when they have no associated data? Seems dumb.
It has to be:
1. Upload all the images
2. Get suggested tags
3. Review the suggested tags and confirm

Comment 7

ID: 1217562 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:10 Selected Answer: - Upvotes: 1

Group
Upload category
Tag

Comment 8

ID: 1204752 User: 9H3zmT6 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 01 May 2024 01:20 Selected Answer: - Upvotes: 1

This question was asked in the actual exam on April 30, 2024 (+9:00, Japan). I think SuperPetey's answer is CORRECT, because I passed the AI-102 exam with a score of 917/1000. Thank you very much.

Comment 8.1

ID: 1217564 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:12 Selected Answer: - Upvotes: 1

So questions registered in 2021 will still be on the exam in April 2024? Japan is a scary country.

Comment 9

ID: 1184629 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 28 Mar 2024 09:06 Selected Answer: - Upvotes: 2

Final Answer
- Upload all
- Get suggested tags
- Review and confirm tags

Comment 10

ID: 1152276 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 23:59 Selected Answer: - Upvotes: 4

To minimize the time required for retraining the model, the correct three steps are:

Upload all images: First, you need to bulk upload the 1000 new images to the Custom Vision service. This is the foundational step for preparing the data.

Get suggested tags: Utilize Custom Vision's functionality to automatically suggest tags for the uploaded images. This can significantly reduce the workload of manual tagging.

Review and confirm suggested tags: Finally, manually review and confirm the tags suggested by the system to ensure their accuracy. Then, use these tagged images to retrain the model.

This process leverages the automation tools provided by Custom Vision to streamline and expedite the data preparation process, particularly effective when dealing with a large number of untagged images.

Comment 11

ID: 1083383 User: tdctdc Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Wed 29 Nov 2023 12:39 Selected Answer: - Upvotes: 1

Well, it's a bit confusing. In both cases (the ET answer and SuperPetey's suggestion) we will have to walk through the pictures manually if there is no info about them. IF they are stored in class folders, the ET answer is less time-consuming; if not, it's not possible to tell whether separating them manually or manually checking the suggested tags will take less time.

Comment 12

ID: 1040135 User: sl_mslconsulting Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Wed 11 Oct 2023 05:10 Selected Answer: - Upvotes: 2

The answer is correct - there is no magic here. You can’t suggest any new tags based on the model you currently have. Read the limitations of the smart labeler carefully: When to use Smart Labeler
Keep the following limitations in mind:
You should only request suggested tags for images whose tags have already been trained on once. Don't get suggestions for a new tag that you're just beginning to train.

Comment 12.1

ID: 1126847 User: josebernabeo Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Fri 19 Jan 2024 18:26 Selected Answer: - Upvotes: 3

"When you tag images for a Custom Vision model, the service uses the latest trained iteration of the model to predict the labels of new images"

source: https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/suggested-tags

Comment 13

ID: 633039 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 15:19 Selected Answer: - Upvotes: 3

Answer given would be only option IF model had not already been trained with images, so...
I agree with SuperPetey et al...

Upload
Get suggested tags
Review and confirm tags

Comment 14

ID: 607991 User: Number00 Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Fri 27 May 2022 10:25 Selected Answer: - Upvotes: 4

I agree with SuperPetey. The answer should be

1.) upload all the images
2.) Get suggested tags
3.) Review the suggestions and confirm the tags

Reason being that the tool (suggested tags) would still apply to the new 1,000 images, even if those 1,000 images aren't associated with the original data pool. So tagging even one fewer image manually thanks to suggested tags would still be faster than tagging them all by hand.
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/suggested-tags

Comment 15

ID: 558708 User: reachmymind Badges: - Relative Date: 4 years ago Absolute Date: Tue 01 Mar 2022 12:48 Selected Answer: - Upvotes: 1

1.) Upload all the images
2.) Get suggested tags
3.) Review the suggestions and confirm the tags

If an image does not have any associated TAG, we can add a new one while reviewing

https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-improving-your-classifier

Comment 16

ID: 448092 User: EXCEL1177 Badges: - Relative Date: 4 years, 5 months ago Absolute Date: Mon 20 Sep 2021 11:47 Selected Answer: - Upvotes: 3

@superpetey, kindly read through the article in the link you shared, I just did and confirmed from it that the provided answer by the platform is correct.

Comment 16.1

ID: 455084 User: angie31 Badges: - Relative Date: 4 years, 5 months ago Absolute Date: Thu 30 Sep 2021 20:23 Selected Answer: - Upvotes: 3

"You should only request suggested tags for images whose content has already been trained once. Don't get suggestions for a new tag that you're just beginning to train." And the question says RETRAINING of an existing model to which we are adding new images. So the response is actually wrong and @superpetey is correct

Comment 16.1.1

ID: 455086 User: angie31 Badges: - Relative Date: 4 years, 5 months ago Absolute Date: Thu 30 Sep 2021 20:25 Selected Answer: - Upvotes: 5

AHHHH but the key words are 'DO NOT HAVE ANY ASSOCIATED DATA'. So the content of the images is brand new!!! Therefore we can't use the suggester, and the response is correct!

Comment 16.1.1.1

ID: 522934 User: ThomasKong Badges: - Relative Date: 4 years, 1 month ago Absolute Date: Thu 13 Jan 2022 17:02 Selected Answer: - Upvotes: 1

I support your highlighted point; it's right on point. So the given answer should be correct.

Comment 16.1.1.2

ID: 1183891 User: Mehe323 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Wed 27 Mar 2024 06:35 Selected Answer: - Upvotes: 2

The point of machine learning is that a model eventually LEARNS how to do things independently. Even though there is no associated data, there is previous learning done and existing labels can be used. I am not sure why we would need ML if we still have to do things manually all the time?

Comment 16.2

ID: 527532 User: GilEdwards Badges: - Relative Date: 4 years, 1 month ago Absolute Date: Wed 19 Jan 2022 14:31 Selected Answer: - Upvotes: 4

I disagree, the images are unlabeled, but there is nothing in the text of the question mentioning that there are new tags. I agree with SuperPetey.

40. AI-102 Topic 2 Question 19

Sequence
122
Discussion ID
83146
Source URL
https://www.examtopics.com/discussions/microsoft/view/83146-exam-ai-102-topic-2-question-19-discussion/
Posted By
firewind
Posted At
Sept. 22, 2022, 12:24 a.m.

Question

HOTSPOT -
You are building an app that will enable users to upload images. The solution must meet the following requirements:
* Automatically suggest alt text for the images.
* Detect inappropriate images and block them.
* Minimize development effort.
You need to recommend a computer vision endpoint for each requirement.
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description Click to expand


Comments 11 comments Click to expand

Comment 1

ID: 727865 User: Tanmay1178 Badges: Highly Voted Relative Date: 2 years, 9 months ago Absolute Date: Sat 27 May 2023 00:11 Selected Answer: - Upvotes: 40

I think it is vision/v3.2/analyze/?visualFeatures=Adult,Description for both

Comment 2

ID: 1361754 User: gyaansastra Badges: Most Recent Relative Date: 1 year ago Absolute Date: Wed 26 Feb 2025 06:36 Selected Answer: - Upvotes: 3

Generate Alt Text:
https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Descnption: Despite the spelling error, this endpoint is intended to generate descriptions for images, which can be used as alt text.

Detect Inappropriate Content:
https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate: This endpoint is specifically designed to detect and block inappropriate images using the Content Moderator API.

Summary:
Generate alt text: https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Descnption

Detect inappropriate content: https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate

By using these endpoints, you can achieve the specified requirements while minimizing development effort.

Comment 3

ID: 1225487 User: NagaoShingo Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 15:35 Selected Answer: - Upvotes: 1

Adult Video

Comment 4

ID: 1220337 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 17:22 Selected Answer: - Upvotes: 2

Both will contain Adult,Description.

Comment 5

ID: 1214386 User: takaimomoGcup Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 20 Nov 2024 17:09 Selected Answer: - Upvotes: 2

Generate alt text should be "v3.2/analyze". Detect inappropriate content should be "v3.2/analyze".

Comment 6

ID: 1152259 User: evangelist Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 16 Aug 2024 22:37 Selected Answer: - Upvotes: 2

description ==>Alt text

Comment 7

ID: 1049658 User: propanther Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sun 21 Apr 2024 18:33 Selected Answer: - Upvotes: 2

Both are https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description

A string indicating what visual feature types to return. Multiple values should be comma-separated.
Valid visual feature types include:

Adult - detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected.
Description - describes the image content with a complete sentence in supported languages.

https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b
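A small sketch of how a single analyze URL covers both requirements (Description for alt text, Adult for moderation), using only standard-library URL handling:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# One endpoint covers both needs: Description generates alt text,
# Adult flags inappropriate content.
base = "https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze"
query = urlencode({"visualFeatures": "Adult,Description"})
url = f"{base}?{query}"

# Round-trip the query string to confirm both features are requested.
params = parse_qs(urlsplit(url).query)
print(params["visualFeatures"][0])  # Adult,Description
```

`urlencode` percent-encodes the comma (`Adult%2CDescription`), which the service decodes back to the comma-separated feature list.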

Comment 8

ID: 685352 User: Tickxit Badges: - Relative Date: 2 years, 11 months ago Absolute Date: Mon 03 Apr 2023 11:39 Selected Answer: - Upvotes: 4

I think it is two times /analyze/?visualFeatures=Adult,Description

Comment 8.1

ID: 686152 User: Anulf Badges: - Relative Date: 2 years, 11 months ago Absolute Date: Tue 04 Apr 2023 14:23 Selected Answer: - Upvotes: 1

Shouldn't it be option 2 in the first column ? "iteratio"

Comment 9

ID: 675553 User: firewind Badges: - Relative Date: 2 years, 11 months ago Absolute Date: Wed 22 Mar 2023 01:24 Selected Answer: - Upvotes: 2

Generate alt text can use either analyze or describe. From the given option, I think it should be the analyze url too.
https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21f

Comment 9.1

ID: 890425 User: Rob77 Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Mon 06 Nov 2023 05:59 Selected Answer: - Upvotes: 1

Agreed, analyze in both answers

41. AI-102 Topic 2 Question 12

Sequence
124
Discussion ID
55440
Source URL
https://www.examtopics.com/discussions/microsoft/view/55440-exam-ai-102-topic-2-question-12-discussion/
Posted By
Jenny1
Posted At
June 16, 2021, 2:35 p.m.

Question

DRAG DROP -
You are developing a call to the Face API. The call must find similar faces from an existing list named employeefaces. The employeefaces list contains 60,000 images.
How should you complete the body of the HTTP request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand

Box 1: LargeFaceListID -
LargeFaceList: Add a face to a specified large face list, up to 1,000,000 faces.
Note: Given query face's faceId, to search the similar-looking faces from a faceId array, a face list or a large face list. A "faceListId" is created by FaceList - Create containing persistedFaceIds that will not expire. And a "largeFaceListId" is created by LargeFaceList - Create containing persistedFaceIds that will also not expire.
Incorrect Answers:
Not "faceListId": Add a face to a specified face list, up to 1,000 faces.

Box 2: matchFace -
Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode that it tries to find faces of the same person as possible by using internal same-person thresholds. It is useful to find a known person's other photos. Note that an empty list will be returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used in the cases like searching celebrity-looking faces.
Reference:
https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar
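Putting the two boxes together, a minimal sketch of the request body for findsimilars against the employeefaces list (the faceId is a hypothetical placeholder obtained from a prior Detect call):

```python
import json

# employeefaces holds 60,000 faces, which exceeds the 1,000-face cap of a
# plain faceListId — hence largeFaceListId.
body = {
    "faceId": "<detected-face-id>",      # placeholder from Face - Detect
    "largeFaceListId": "employeefaces",
    "maxNumOfCandidatesReturned": 10,
    "mode": "matchFace",                 # rank similar faces, ignore same-person thresholds
}
payload = json.dumps(body)
print(payload)
```

The payload would be POSTed to `{Endpoint}/face/v1.0/findsimilars` with the usual `Ocp-Apim-Subscription-Key` header.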

Comments 21 comments Click to expand

Comment 1

ID: 383385 User: Jenny1 Badges: Highly Voted Relative Date: 4 years, 8 months ago Absolute Date: Wed 16 Jun 2021 14:35 Selected Answer: - Upvotes: 22

Correct.

Comment 2

ID: 597530 User: PHD_CHENG Badges: Highly Voted Relative Date: 3 years, 10 months ago Absolute Date: Fri 06 May 2022 05:33 Selected Answer: - Upvotes: 14

Facelist ID up to 1,000 faces; LargeFaceListId up to 1,000,000 faces
https://docs.microsoft.com/en-us/rest/api/faceapi/large-face-list

Comment 3

ID: 1361463 User: gyaansastra Badges: Most Recent Relative Date: 1 year ago Absolute Date: Tue 25 Feb 2025 14:57 Selected Answer: - Upvotes: 1

The given answer is correct here.

Explanation:
largeFaceListId: use this parameter because employeefaces contains 60,000 images, which exceeds the limit of a regular faceListId.

matchFace: this mode is used to find similar faces.

By using largeFaceListId and matchFace, you ensure that the Face API call can handle the large number of images in the list and find similar faces effectively.

Comment 4

ID: 1278957 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 05 Sep 2024 15:31 Selected Answer: - Upvotes: 2

Another of those detective-style questions where you find the answer from hidden clues in the English wording. Microsoft is a truly remarkable org.

"Similar faces", "must find", and "employees". "Similar faces" and "must find" might point to matchFace, but for employees it should be matchPerson (we need the person, not someone similar-looking). Now, what will you choose? Very hard. A remarkable piece of, hmm, sweetness.

Comment 4.1

ID: 1278960 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 05 Sep 2024 15:33 Selected Answer: - Upvotes: 2

And the fact that this remarkable org created both a largeFaceList and a faceList and expects you to remember that piece of bad design. The size of the face list should be handled at the server, not baked into the client calls. Now if the face list grows in size, the clients need to change. What kind of org creates these kinds of things and shamelessly asks about them in a certification exam? Microsoft is the answer.

Comment 4.1.1

ID: 1278964 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 05 Sep 2024 15:38 Selected Answer: - Upvotes: 1

I will go for matchFace, considering the other clue, "similar faces", instead of person. Man, this question should be taken to court and Microsoft should be whipped.

Comment 5

ID: 1235751 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 23 Jun 2024 10:47 Selected Answer: - Upvotes: 2

largeFaceListId
matchFace - the reason it's not matchPerson: matchPerson compares against a larger person group rather than one-to-one; here we use matchFace to find similar faces based on one-to-one analysis.

Comment 6

ID: 1234508 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 16:54 Selected Answer: - Upvotes: 1

1. LargeFaceListId
2. matchFace

Comment 7

ID: 1218344 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 15:25 Selected Answer: - Upvotes: 1

1. LargeFaceListId
2. matchFace

Comment 8

ID: 1182441 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 13:01 Selected Answer: - Upvotes: 4

Final Answer:
1. LargeFaceListId
2. matchFace ( matchPerson could return an empty list, but matchFace will not.)

Comment 9

ID: 1179861 User: Ody Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 06:25 Selected Answer: - Upvotes: 1

The key is "must find". matchPerson could return an empty list, but matchFace will not.

Comment 10

ID: 633003 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 14:04 Selected Answer: - Upvotes: 8

I'm leaning towards: largeFaceListId and matchPerson

https://docs.microsoft.com/en-us/rest/api/faceapi/face/find-similar?tabs=HTTP#find-similar-results-example

"matchPerson" is the default mode that it tries to find faces of the same person as possible by using internal same-person thresholds. It is useful to find a known person's other photos. Note that an empty list will be returned if no faces pass the internal thresholds.

"matchFace" mode ignores same-person thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used in the cases like searching celebrity-looking faces.

Comment 10.1

ID: 982563 User: makciek Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Wed 16 Aug 2023 14:50 Selected Answer: - Upvotes: 1

it says "The call must find similar faces from an existing list named employeefaces", similar being the keyword here, so I think matchFace is correct.

Comment 10.2

ID: 1106885 User: ccie_pgh Badges: - Relative Date: 2 years, 2 months ago Absolute Date: Wed 27 Dec 2023 15:20 Selected Answer: - Upvotes: 2

Agree since code also has "maxNumOfCandidatesReturned", which would be ignored by "matchFace"... both "matchFace" and "matchPerson" find similars but only "matchPerson" will use internal thresholds.

Comment 11

ID: 612713 User: PHD_CHENG Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Tue 07 Jun 2022 13:28 Selected Answer: - Upvotes: 3

Was on exam 7 Jun 2022

Comment 12

ID: 608851 User: luishenriquesb Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Sun 29 May 2022 18:50 Selected Answer: - Upvotes: 5

It should be largeFaceListId, not LargeFaceListId (capitalized). The capitalized form wouldn't work in an HTTP request...

Comment 13

ID: 590814 User: gursimran_s Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Sat 23 Apr 2022 23:55 Selected Answer: - Upvotes: 1

The 1,000-face limit is for the faceIds array, not the faceListId. So it could be faceListId as well.

Comment 14

ID: 590812 User: gursimran_s Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Sat 23 Apr 2022 23:53 Selected Answer: - Upvotes: 1

Why not faceListId? Nothing specific mentioned on MS docs.

Comment 15

ID: 538265 User: klion Badges: - Relative Date: 4 years, 1 month ago Absolute Date: Wed 02 Feb 2022 00:55 Selected Answer: - Upvotes: 3

"matchPerson"

Find similar results example
Sample Request
HTTP
POST {Endpoint}/face/v1.0/findsimilars
Ocp-Apim-Subscription-Key: {API key}
Request Body
JSON
{
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "largeFaceListId": "sample_list",
    "maxNumOfCandidatesReturned": 1,
    "mode": "matchPerson"
}

https://docs.microsoft.com/en-us/rest/api/faceapi/face/find-similar

Comment 16

ID: 517042 User: sumanshu Badges: - Relative Date: 4 years, 2 months ago Absolute Date: Wed 05 Jan 2022 00:13 Selected Answer: - Upvotes: 5

In the question it's given that we have to find similar faces, so we have to use "matchFace"; and because it is a large list, we have to use largeFaceListId.

Comment 17

ID: 473419 User: mikegsm Badges: - Relative Date: 4 years, 4 months ago Absolute Date: Sat 06 Nov 2021 12:18 Selected Answer: - Upvotes: 2

Seems Correct

42. AI-102 Topic 2 Question 41

Sequence
127
Discussion ID
150068
Source URL
https://www.examtopics.com/discussions/microsoft/view/150068-exam-ai-102-topic-2-question-41-discussion/
Posted By
a8da4af
Posted At
Oct. 22, 2024, 9:22 p.m.

Question

HOTSPOT -

You have an Azure subscription that contains an Azure AI Video Indexer account.

You need to add a custom brand and logo to the indexer and configure an exclusion for the custom brand.

How should you complete the REST API call? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 7 comments Click to expand

Comment 1

ID: 1356068 User: FatFatSam Badges: - Relative Date: 1 year ago Absolute Date: Thu 13 Feb 2025 11:24 Selected Answer: - Upvotes: 2

there are 2 possible answer pairs
enabled: false
tags: [Excluded list]

For all requests, setting enabled to true puts the brand in the Include list for Azure AI Video Indexer to detect. Setting enabled to false puts the brand in the Exclude list, so Azure AI Video Indexer won't detect it.
https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-brands-model-how-to?tabs=customizeapi#exclude-brands-from-the-model
https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Brand
Please set the sample request in the Create-Brand API.
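A sketch of what the Create-Brand request body could look like with the brand excluded. The brand name, URL, and tags are illustrative placeholders, and the exact field set is an assumption; confirm it against the Create-Brand page in the API portal linked above.

```json
{
  "name": "Contoso",
  "referenceUrl": "https://en.wikipedia.org/wiki/Contoso",
  "description": "Custom brand, excluded from detection",
  "tags": ["logo", "brand"],
  "enabled": false
}
```

Per the doc quoted in this thread, `"enabled": false` is what places the brand on the Exclude list so Video Indexer won't detect it.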

Comment 2

ID: 1315968 User: LCES Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 21 Nov 2024 20:03 Selected Answer: - Upvotes: 3

enabled: false

https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-brands-model-how-to?tabs=customizeapi#exclude-brands-from-the-model
For all requests, setting enabled to true puts the brand in the Include list for Azure AI Video Indexer to detect. Setting enabled to false puts the brand in the Exclude list, so Azure AI Video Indexer won't detect it.

Comment 3

ID: 1315886 User: FatFatSam Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 21 Nov 2024 16:53 Selected Answer: - Upvotes: 3

enabled: false
https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-brands-model-how-to?tabs=customizeapi#exclude-brands-from-the-model

For all requests, setting enabled to true puts the brand in the Include list for Azure AI Video Indexer to detect. Setting enabled to false puts the brand in the Exclude list, so Azure AI Video Indexer won't detect it.

Comment 4

ID: 1315349 User: mrwiti Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 20 Nov 2024 16:43 Selected Answer: - Upvotes: 3

Answer is correct: https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-brands-model-how-to?tabs=customizewebportal

For all requests, setting enabled to true puts the brand in the Include list for Azure AI Video Indexer to detect. Setting enabled to false puts the brand in the Exclude list, so Azure AI Video Indexer won't detect it.

Comment 5

ID: 1313541 User: 9cc71b6 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 17 Nov 2024 14:35 Selected Answer: - Upvotes: 1

State : Excluded

The question explicitly states you need to "configure an exclusion for the custom brand." This directly relates to the "state" field, which determines whether the custom brand is Excluded or Included in the indexing process.

If Excluded, the brand will not be analyzed or indexed.
If Included, the brand will be treated as part of the content, which contradicts the exclusion requirement.

Comment 6

ID: 1307949 User: Alan_CA Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Wed 06 Nov 2024 16:03 Selected Answer: - Upvotes: 1

Not sure; Copilot says "exclude": true
"customBrands": [ { "name": "customBrandName", "exclude": true } ]

Comment 7

ID: 1301684 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 22 Oct 2024 21:22 Selected Answer: - Upvotes: 1

Incorrect. ChatGPT responded that the correct options are "tags" and ["Excluded"].

To add a custom brand and logo to Azure AI Video Indexer and configure an exclusion for the brand, you need to focus on how to manage the exclusion of that custom brand. The correct combination of ANSWER_KEY and ANSWER_VALUE in the API call depends on defining the brand and ensuring it's excluded from indexing.

Here’s the appropriate configuration for the REST API call:

ANSWER_KEY: "tags"
ANSWER_VALUE: ["Excluded"]
This combination ensures that the custom brand ("Contoso") is tagged as "Excluded," meaning it will be excluded from the indexing process.

43. AI-102 Topic 1 Question 71

Sequence
130
Discussion ID
150683
Source URL
https://www.examtopics.com/discussions/microsoft/view/150683-exam-ai-102-topic-1-question-71-discussion/
Posted By
a8da4af
Posted At
Nov. 3, 2024, 10:09 p.m.

Question

HOTSPOT -

You have 1,000 scanned images of hand-written survey responses. The surveys do NOT have a consistent layout.

You have an Azure subscription that contains an Azure AI Document Intelligence resource named AIdoc1.

You open Document Intelligence Studio and create a new project.

You need to extract data from the survey responses. The solution must minimize development effort.

To where should you upload the images, and which type of model should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 4 comments Click to expand

Comment 1

ID: 1306646 User: a8da4af Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Sun 03 Nov 2024 22:09 Selected Answer: - Upvotes: 11

Explanation:

Upload To - Azure Storage Account: Azure Document Intelligence (formerly known as Form Recognizer) works well with images and documents stored in an Azure Storage account, which provides direct integration for processing files. This is the most compatible and streamlined choice for handling scanned images in this scenario.

Model Type - Custom Neural: Since the surveys are handwritten and lack a consistent layout, the Custom neural model is the best choice. This model type is designed for flexibility and works effectively with unstructured or semi-structured documents. It can process handwritten text and extract information even when the layout varies across documents, which minimizes the need for custom development and complex template creation.
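The reasoning above can be sketched in code. The helper below just encodes the rule of thumb from the docs; the SDK call in the comment is a sketch using the azure-ai-formrecognizer package, where the endpoint, key, model ID, and file name are all placeholders, not values from the question.

```python
def choose_custom_model_type(consistent_layout: bool) -> str:
    """Rule of thumb from the Document Intelligence docs: custom template
    models fit fixed visual layouts; custom neural models handle
    structured, semi-structured, and unstructured documents whose
    layout varies."""
    return "template" if consistent_layout else "neural"


# Hand-written surveys without a consistent layout -> custom neural.
assert choose_custom_model_type(consistent_layout=False) == "neural"

# Analyzing with a trained custom neural model (placeholder names):
#
# from azure.ai.formrecognizer import DocumentAnalysisClient
# from azure.core.credentials import AzureKeyCredential
#
# client = DocumentAnalysisClient(
#     "https://<AIdoc1>.cognitiveservices.azure.com",
#     AzureKeyCredential("<key>"))
# with open("survey0001.jpg", "rb") as f:
#     poller = client.begin_analyze_document("<survey-neural-model-id>", document=f)
# result = poller.result()
```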

Comment 1.1

ID: 1332796 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Sat 28 Dec 2024 06:24 Selected Answer: - Upvotes: 3

Why Not Custom Template:
Consistency Requirement: Custom Template is better suited for documents with more consistent structures and layouts. Since your surveys do not have a consistent layout, Custom Template would be less effective and might require more effort to implement correctly.

Comment 1.1.1

ID: 1340376 User: coolDude_2K Badges: - Relative Date: 1 year, 1 month ago Absolute Date: Tue 14 Jan 2025 15:09 Selected Answer: - Upvotes: 1

Custom Template is suitable to extract fields from highly structured documents with defined visual templates.

Comment 2

ID: 1349466 User: lbansal Badges: Most Recent Relative Date: 1 year, 1 month ago Absolute Date: Fri 31 Jan 2025 11:15 Selected Answer: - Upvotes: 1

from the link -
https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/train/custom-neural?view=doc-intel-4.0.0
Custom Neural supports these document types -
Structured surveys, questionnaires
Semi-structured invoices, purchase orders

44. AI-102 Topic 2 Question 32

Sequence
131
Discussion ID
122477
Source URL
https://www.examtopics.com/discussions/microsoft/view/122477-exam-ai-102-topic-2-question-32-discussion/
Posted By
jangotango
Posted At
Oct. 5, 2023, 2:25 a.m.

Question

You are building an app that will include one million scanned magazine articles. Each article will be stored as an image file.

You need to configure the app to extract text from the images. The solution must minimize development effort.

What should you include in the solution?

  • A. Computer Vision Image Analysis
  • B. the Read API in Computer Vision
  • C. Form Recognizer
  • D. Azure Cognitive Service for Language

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 16 comments Click to expand

Comment 1

ID: 1025226 User: jangotango Badges: Highly Voted Relative Date: 2 years, 5 months ago Absolute Date: Thu 05 Oct 2023 02:28 Selected Answer: - Upvotes: 10

All answers should have a reference to prove the answer is true

Comment 2

ID: 1026261 User: JDKJDKJDK Badges: Highly Voted Relative Date: 2 years, 5 months ago Absolute Date: Fri 06 Oct 2023 07:27 Selected Answer: B Upvotes: 5

i also think its B

Use this interface to get the result of a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents.

https://learn.microsoft.com/en-us/rest/api/computervision/3.2preview2/read/read?tabs=HTTP
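The Read call is asynchronous: you POST the image, get back an Operation-Location header, and poll it for results. A small sketch of extracting the operation ID; the endpoint, key, and file name in the comment are placeholders.

```python
def operation_id_from_location(operation_location: str) -> str:
    """The Read API returns an Operation-Location header such as
    https://<endpoint>/vision/v3.2/read/analyzeResults/<operation-id>;
    the trailing path segment is the ID to poll with."""
    return operation_location.rstrip("/").rsplit("/", 1)[-1]


# Submitting an image, sketched with requests (placeholder endpoint/key):
#
# import requests
# resp = requests.post(
#     "https://<resource>.cognitiveservices.azure.com/vision/v3.2/read/analyze",
#     headers={"Ocp-Apim-Subscription-Key": "<key>",
#              "Content-Type": "application/octet-stream"},
#     data=open("article0001.jpg", "rb").read())
# op_id = operation_id_from_location(resp.headers["Operation-Location"])
```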

Comment 3

ID: 1348113 User: DriftKing Badges: Most Recent Relative Date: 1 year, 1 month ago Absolute Date: Tue 28 Jan 2025 20:01 Selected Answer: B Upvotes: 1

Computer Vision is now Azure AI Vision. You can use the Read API of its OCR service.

The Optical Character Recognition (OCR) service extracts text from images. You can use the Read API to extract printed and handwritten text from photos and documents.
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview

Comment 4

ID: 1289262 User: shanakrs Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Thu 26 Sep 2024 04:46 Selected Answer: - Upvotes: 2

Answer is C
Form Recognizer

There is a note in the Microsoft doc below mentioning document images for text extraction:

OCR for images (version 4.0)

https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-ocr

Please refer to the important section to select the Read edition that best fits your requirements:
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-read-api

Comment 5

ID: 1285092 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 10:20 Selected Answer: - Upvotes: 1

Azure AI Document Intelligence is a more sophisticated solution. For example, it can identify key/value pairs, tables, and context-specific fields. If you want to deploy a complete document analysis solution for both extracting AND understanding text, Azure AI Document Intelligence is a good choice. In this case, Document Intelligence is too advanced a solution, as the question doesn't provide any information about what to do with the extracted text; the focus here is on text extraction alone. So the answer is B.

Comment 5.1

ID: 1285093 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 10:20 Selected Answer: - Upvotes: 1

so answer is B

Comment 6

ID: 1248316 User: CellCS Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 15 Jul 2024 14:07 Selected Answer: - Upvotes: 3

A. Computer Vision does not have a read model. The read model is a Document Intelligence (Form Recognizer) model.

Comment 7

ID: 1246818 User: krzkrzkra Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 12 Jul 2024 16:24 Selected Answer: B Upvotes: 1

Selected Answer: B

Comment 8

ID: 1235168 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:12 Selected Answer: B Upvotes: 1

My answer for this one is B.

Comment 9

ID: 1231324 User: MarceloManhaes Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 16 Jun 2024 13:15 Selected Answer: - Upvotes: 2

This is a good explanation from chat GPT why B is the correct fit and A is not:

A - Computer Vision Image Analysis: This is a service provided by Azure that can extract a wide variety of visual features from your images. It can determine whether an image contains adult content, find specific brands or objects, or find human faces. However, while it does have OCR capabilities, it is not specifically designed for large-scale text extraction from images.

B - Read API in Computer Vision: This is a part of the Azure Computer Vision service that is designed to extract printed and handwritten text from images. It uses state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents2. This could be a good fit for your needs as it is designed to handle large amounts of text in images.

The key here is that option A is not specifically designed for large-scale text extraction from images.

Comment 10

ID: 1218257 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:33 Selected Answer: B Upvotes: 2

the Read API in Computer Vision is right.

Comment 11

ID: 1191012 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 07 Apr 2024 16:24 Selected Answer: B Upvotes: 2

Read API is particularly adapted for text-heavy documents and it seems this is the case

Comment 12

ID: 1181359 User: AlviraTony Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 24 Mar 2024 08:42 Selected Answer: C Upvotes: 3

Document Intelligence (previously Form Recognizer) reads small to large volumes of text from images and PDF documents, for example: receipts, articles, and invoices.

Comment 12.1

ID: 1183828 User: Mehe323 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Wed 27 Mar 2024 04:40 Selected Answer: - Upvotes: 1

No, I don't think so. Azure AI Document Intelligence is a more sophisticated solution. For example, it can identify key/value pairs, tables, and context-specific fields. If you want to deploy a complete document analysis solution for both extracting AND understanding text, Azure AI Document Intelligence is a good choice. In this case, Document Intelligence is too advanced a solution, as the question doesn't provide any information about what to do with the extracted text; the focus here is on text extraction alone. So the answer is B.

Comment 13

ID: 1056483 User: devilsole Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Sun 29 Oct 2023 00:58 Selected Answer: - Upvotes: 2

I think it should be B.
Based on ChatGPT: use the Azure Read API if:

Your primary focus is on text extraction from documents, forms, or images with printed text.
You have a batch processing requirement, and you need to process a large number of documents or images at once.
You need highly accurate text extraction with structured output.

Comment 14

ID: 1025225 User: jangotango Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Thu 05 Oct 2023 02:25 Selected Answer: - Upvotes: 1

Why not B?

45. AI-102 Topic 2 Question 39

Sequence
132
Discussion ID
145183
Source URL
https://www.examtopics.com/discussions/microsoft/view/145183-exam-ai-102-topic-2-question-39-discussion/
Posted By
moonlightc
Posted At
Aug. 6, 2024, 10:48 p.m.

Question

You are developing a method that uses the Azure AI Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code.

image

During testing, you discover that the call to the get_read_result method occurs before the read operation is complete.

You need to prevent the get_read_result method from proceeding until the read operation is complete.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

  • A. Remove the operation_id parameter.
  • B. Add code to verify the read_results.status value.
  • C. Add code to verify the status of the read_operation_location value.
  • D. Wrap the call to get_read_result within a loop that contains a delay.

Suggested Answer

BD

Answer Description Click to expand


Community Answer Votes

Comments 4 comments Click to expand

Comment 1

ID: 1264932 User: anto69 Badges: Highly Voted Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 04:44 Selected Answer: BD Upvotes: 7

B - D, repeated many times with both python an C#

Comment 2

ID: 1343849 User: TsuenKit Badges: Most Recent Relative Date: 1 year, 1 month ago Absolute Date: Mon 20 Jan 2025 21:21 Selected Answer: BD Upvotes: 1

The answer is B and D.
Explanation
B. Add code to verify the read_results.status value.
We need to check this status to ensure that the operation is complete before retrieving the results. Adding code to check the read_results.status value gives the status of the OCR operation (e.g., "notStarted", "running", or "succeeded"), so we avoid accessing results before they are available.

D. Wrap the call to get_read_result within a loop that contains a delay.
The OCR process in Azure AI Vision is asynchronous, so wrapping the call to get_read_result in a loop with a delay allows you to continuously check whether the operation is complete.
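The two fixes (B and D) combine into a standard polling loop. Below is a sketch with an injectable status fetcher so the loop can be exercised without calling the service; with the real SDK the fetcher would be something like `lambda: client.get_read_result(operation_id)`, whose status values come from the OperationStatusCodes enum. Comparing against plain strings here is an assumption made for the sketch.

```python
import time

def wait_for_read_result(get_result, delay=1.0, max_polls=60):
    """Poll until the read operation leaves the 'notStarted'/'running' states.

    get_result is any zero-argument callable returning an object with a
    .status attribute, like the SDK's get_read_result response.
    Raises TimeoutError if the operation never completes.
    """
    for _ in range(max_polls):
        result = get_result()                                # action D: call inside a loop
        if result.status not in ("notStarted", "running"):   # action B: verify the status
            return result
        time.sleep(delay)                                    # delay between polls
    raise TimeoutError("read operation did not complete in time")
```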

Comment 3

ID: 1281472 User: mrg998 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 10 Sep 2024 12:30 Selected Answer: - Upvotes: 1

B & D for sure

Comment 4

ID: 1261825 User: moonlightc Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 06 Aug 2024 22:48 Selected Answer: BD Upvotes: 2

B and D

46. AI-102 Topic 2 Question 42

Sequence
135
Discussion ID
150146
Source URL
https://www.examtopics.com/discussions/microsoft/view/150146-exam-ai-102-topic-2-question-42-discussion/
Posted By
Christian_garcia_martin
Posted At
Oct. 24, 2024, 8:09 a.m.

Question

You have a local folder that contains the files shown in the following table.

image

You need to analyze the files by using Azure AI Video Indexer.

Which files can you upload to the Video Indexer website?

  • A. File1 and File3 only
  • B. File1, File2, File3 and File4
  • C. File1, File2, and File3 only
  • D. File1 and File2 only
  • E. File1, File2, and File4 only

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 5 comments Click to expand

Comment 1

ID: 1339338 User: MRGPAL Badges: - Relative Date: 1 year, 1 month ago Absolute Date: Sun 12 Jan 2025 04:21 Selected Answer: E Upvotes: 2

File 1 (WMV)
1. Length: 34 minutes (Within the 240-minute limit)
2. Size: 400 MB (Within the 2 GB limit)
3. Format: WMV (Supported)
4. Conclusion: Uploadable

File 2 (AVI)
1. Length: 90 minutes (Within the 240-minute limit)
2. Size: 1,200 MB (Within the 2 GB limit)
3. Format: AVI (Supported)
4. Conclusion: Uploadable

File 3 (MOV)
1. Length: 300 minutes (Exceeds the 240-minute limit)
2. Size: 980 MB (Within the 2 GB limit)
3. Format: MOV (Supported)
4. Conclusion: Not Uploadable (due to exceeding the length limit)

File 4 (MP4)
1. Length: 80 minutes (Within the 240-minute limit)
2. Size: 1,800 MB (Within the 2 GB limit)
3. Format: MP4 (Supported)
4. Conclusion: Uploadable

Comment 1.1

ID: 1340116 User: BenALGuhl Badges: - Relative Date: 1 year, 1 month ago Absolute Date: Tue 14 Jan 2025 02:49 Selected Answer: - Upvotes: 5

This is not correct. Based on current MS doc (14/Jan/2025): https://learn.microsoft.com/en-us/azure/azure-video-indexer/avi-support-matrix, the limit is 6h = 360m. Answer is B.

Comment 2

ID: 1308151 User: jolimon Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Wed 06 Nov 2024 21:44 Selected Answer: B Upvotes: 4

B is the right answer, 100%.

Comment 3

ID: 1302325 User: Christian_garcia_martin Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Thu 24 Oct 2024 08:09 Selected Answer: - Upvotes: 4

The answer is right:
- size limit: 2 GB uploaded from a device, 30 GB uploaded from a URL
- time limit: 6 hours for video, 12 hours for basic audio
- supported formats: WMV, AVI, MOV, MP4

So all the files are allowed.
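Those limits can be encoded in a quick checker. The numbers below come from the limits cited in this thread and the support matrix linked in the replies (2 GB device upload, 6-hour video limit); verify them against the current doc before relying on them. The file stats are the ones listed earlier in the discussion.

```python
SUPPORTED_FORMATS = {"WMV", "AVI", "MOV", "MP4"}  # among others in the support matrix
MAX_DEVICE_UPLOAD_MB = 2 * 1024                   # 2 GB from a device (30 GB via URL)
MAX_VIDEO_MINUTES = 6 * 60                        # 6 h for video (12 h for basic audio)

def can_upload(fmt, size_mb, minutes):
    """True if a file is within the device-upload limits of the VI website."""
    return (fmt.upper() in SUPPORTED_FORMATS
            and size_mb <= MAX_DEVICE_UPLOAD_MB
            and minutes <= MAX_VIDEO_MINUTES)

# All four files from the question pass, matching answer B:
files = [("WMV", 400, 34), ("AVI", 1200, 90), ("MOV", 980, 300), ("MP4", 1800, 80)]
assert all(can_upload(f, s, m) for f, s, m in files)
```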

Comment 3.1

ID: 1324240 User: Azure-JL Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Mon 09 Dec 2024 22:18 Selected Answer: - Upvotes: 3

https://learn.microsoft.com/en-us/azure/azure-video-indexer/avi-support-matrix

47. AI-102 Topic 3 Question 76

Sequence
136
Discussion ID
143318
Source URL
https://www.examtopics.com/discussions/microsoft/view/143318-exam-ai-102-topic-3-question-76-discussion/
Posted By
MarceloManhaes
Posted At
July 5, 2024, 3:57 a.m.

Question

You are building an internet-based training solution. The solution requires that a user's camera and microphone remain enabled.

You need to monitor a video stream of the user and verify that the user is alone and is not collaborating with another user. The solution must minimize development effort.

What should you include in the solution?

  • A. speech-to-text in the Azure AI Speech service
  • B. object detection in Azure AI Custom Vision
  • C. Spatial Analysis in Azure AI Vision
  • D. object detection in Azure AI Custom Vision

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 18 comments Click to expand

Comment 1

ID: 1337551 User: FatFatSam Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Tue 07 Jan 2025 13:57 Selected Answer: C Upvotes: 2

However, On 30 March 2025, Azure AI Vision Spatial Analysis will be retired.
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/spatial-analysis-operations

Comment 2

ID: 1333952 User: pabsinaz Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Mon 30 Dec 2024 08:24 Selected Answer: C Upvotes: 2

A speech solution makes no sense because you can collaborate and cheat visually without sound. Object detection requires a lot of effort and does not entirely meet the goal.

Comment 3

ID: 1329207 User: MASANASA Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Fri 20 Dec 2024 01:40 Selected Answer: A Upvotes: 1

It must be A. Look at the question: "user's camera and microphone remain enabled". I expect these are a laptop's camera and microphone, so the camera cannot watch the entire room view. Detecting voice and then using STT must be the right answer.

Comment 3.1

ID: 1329208 User: MASANASA Badges: - Relative Date: 1 year, 2 months ago Absolute Date: Fri 20 Dec 2024 01:42 Selected Answer: - Upvotes: 1

room view

Comment 4

ID: 1313667 User: 9cc71b6 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 17 Nov 2024 17:53 Selected Answer: A Upvotes: 1

This must be speech to text.

Comment 5

ID: 1303592 User: Christian_garcia_martin Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Sun 27 Oct 2024 14:25 Selected Answer: - Upvotes: 2

Spatial Analysis is the right answer. Options B and D are the same; if this is a real MS exam, it could be a lot better.

Comment 6

ID: 1296202 User: aa18a1a Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Fri 11 Oct 2024 20:05 Selected Answer: C Upvotes: 1

https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/spatial-analysis-operations
Below is the description for cognitiveservices.vision.spatialanalysis-personcount
"Counts people in a designated zone in the camera's field of view. The zone must be fully covered by a single camera in order for PersonCount to record an accurate total."

Comment 7

ID: 1251325 User: Spicewolf Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Fri 19 Jul 2024 18:41 Selected Answer: C Upvotes: 2

ChatGPT, Gemini, and Claude are also voting for C.

Comment 7.1

ID: 1283965 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 09:46 Selected Answer: - Upvotes: 3

Risky to use Artificial intelligence on intelligence-free Microsoft questions

Comment 7.2

ID: 1283966 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 09:49 Selected Answer: - Upvotes: 1

But this is indeed correct. Spatial Analysis is what Microsoft is looking for

Comment 7.2.1

ID: 1283967 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 09:50 Selected Answer: - Upvotes: 1

But I wonder why not speech as well; then again, why overthink it.

Comment 8

ID: 1248282 User: 10Dres Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 15 Jul 2024 12:57 Selected Answer: - Upvotes: 2

and what if the another person is out of the camera and is talking ?

Comment 9

ID: 1244960 User: Toby86 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 09 Jul 2024 16:34 Selected Answer: C Upvotes: 2

"How does Azure AI Vision analyze people in a physical space? The spatial analysis AI models detect and track movements in the video feed based on algorithms that identify the presence of one or more humans by a body bounding box."

Answer: C. Spatial Analysis

Comment 10

ID: 1244959 User: Toby86 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 09 Jul 2024 16:32 Selected Answer: - Upvotes: 1

"How does Azure AI Vision analyze people in a physical space? The spatial analysis AI models detect and track movements in the video feed based on algorithms that identify the presence of one or more humans by a body bounding box."

Comment 11

ID: 1244455 User: anto69 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 08 Jul 2024 17:56 Selected Answer: - Upvotes: 1

But why are B and D identical?

Comment 12

ID: 1243503 User: MarceloManhaes Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 06 Jul 2024 20:59 Selected Answer: C Upvotes: 2

To monitor a video stream of the user and verify that the user is alone and not collaborating with another user, you should include Spatial Analysis in Azure AI Vision in your solution. This service can analyze the spatial relationships between people, movements, and interactions in a physical space using video data. So, the correct answer is:

C. Spatial Analysis in Azure AI Vision

Comment 12.1

ID: 1243505 User: MarceloManhaes Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 06 Jul 2024 20:59 Selected Answer: - Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/intro-to-spatial-analysis-public-preview?tabs=sa

Comment 13

ID: 1242412 User: MarceloManhaes Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 05 Jul 2024 03:57 Selected Answer: B Upvotes: 1

For the specific scenario of monitoring a video stream to verify that a user is alone and not collaborating with another user, object detection using Azure AI Custom Vision is a more suitable choice. Object detection can identify and track objects or people within the video stream, allowing you to determine if multiple individuals are present. It aligns better with your goal and minimizes development effort.

The selected answer should be B

48. AI-102 Topic 1 Question 31

Sequence
137
Discussion ID
82100
Source URL
https://www.examtopics.com/discussions/microsoft/view/82100-exam-ai-102-topic-1-question-31-discussion/
Posted By
momentumhd
Posted At
Sept. 14, 2022, 8:55 a.m.

Question

You have an Azure Cognitive Search instance that indexes purchase orders by using Form Recognizer.
You need to analyze the extracted information by using Microsoft Power BI. The solution must minimize development effort.
What should you add to the indexer?

  • A. a projection group
  • B. a table projection
  • C. a file projection
  • D. an object projection

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 18 comments Click to expand

Comment 1

ID: 668688 User: momentumhd Badges: Highly Voted Relative Date: 3 years, 5 months ago Absolute Date: Wed 14 Sep 2022 08:55 Selected Answer: B Upvotes: 22

Should be B. For Power BI, it's tables.

" Use Power BI for data exploration. This tool works best when the data is in Azure Table Storage. Within Power BI, you can manipulate data into new tables that are easier to query and analyze"

Comment 1.1

ID: 863909 User: uira Badges: - Relative Date: 2 years, 11 months ago Absolute Date: Fri 07 Apr 2023 15:18 Selected Answer: - Upvotes: 7

You receive a JSON object, so ObjectProjection is the most appropriate way to explore:
Objects: "Used when you need the full JSON representation of your data and enrichments in one JSON document. As with table projections, only valid JSON objects can be projected as objects, and shaping can help you do that."

As per https://learn.microsoft.com/en-us/azure/search/knowledge-store-projection-overview

Comment 2

ID: 1056728 User: Lion007 Badges: Highly Voted Relative Date: 2 years, 4 months ago Absolute Date: Sun 29 Oct 2023 11:08 Selected Answer: B Upvotes: 10

B is the correct answer. See below to understand the workflow:
Purchase Orders (POs) -> Form Recognizer -> OCR -> JSON (extracted info from POs) -> Shaper skill -> JSON -> Table Projection -> JSON -> Power BI

Ref: Define a table projection https://learn.microsoft.com/en-us/azure/search/knowledge-store-projections-examples#define-a-table-projection
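The workflow above ends in a knowledgeStore definition on the skillset. A sketch of a table projection follows; the table name, key name, and source path are illustrative placeholders, and the overall shape follows the projection-examples doc linked above.

```json
"knowledgeStore": {
  "storageConnectionString": "<Azure Storage connection string>",
  "projections": [
    {
      "tables": [
        {
          "tableName": "PurchaseOrders",
          "generatedKeyName": "OrderId",
          "source": "/document/tableprojection"
        }
      ],
      "objects": [],
      "files": []
    }
  ]
}
```

The projected rows land in Azure Table Storage, which Power BI can read directly; that is what makes table projections the low-effort path here.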

Comment 3

ID: 1337192 User: PreetiLearner Badges: Most Recent Relative Date: 1 year, 2 months ago Absolute Date: Mon 06 Jan 2025 16:06 Selected Answer: D Upvotes: 1

Object projections allow you to shape the data in a way that is easily consumable by Power BI, facilitating seamless integration and analysis

Comment 4

ID: 1274609 User: famco Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 29 Aug 2024 18:02 Selected Answer: - Upvotes: 1

Microsoft, is this question about AI or what powerBI needs. Such a dysfunctional org that fails in all departments

Comment 5

ID: 1235161 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:06 Selected Answer: B Upvotes: 1

Table is the answer.

Comment 6

ID: 1218239 User: meluk Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:20 Selected Answer: - Upvotes: 2

The correct answer is B. a table projection.

Here’s why:

A table projection in the indexer allows you to extract structured data from the indexed content.
By using a table projection, you can easily analyze the extracted information in Microsoft Power BI without extensive development effort.
It aligns with the requirement of minimizing development effort while enabling efficient data analysis.
Therefore, adding a table projection to the indexer is the most suitable choice for this scenario.

Comment 7

ID: 1217622 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:54 Selected Answer: B Upvotes: 1

It must be B.

Comment 8

ID: 1213796 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 19 May 2024 15:21 Selected Answer: B Upvotes: 1

B is right.

Comment 9

ID: 1204869 User: anntv252 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 01 May 2024 07:21 Selected Answer: - Upvotes: 1

For Power BI, it's tables.

Comment 10

ID: 1191779 User: Ronny05 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 08 Apr 2024 20:59 Selected Answer: - Upvotes: 1

Answer is Table Projection
Table projections are recommended for scenarios that call for data exploration, such as
analysis with Power BI or workloads that consume data frames.
https://learn.microsoft.com/en-us/azure/search/knowledge-store-projections-examples#define-a-table-projection

Comment 11

ID: 1147855 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Mon 12 Feb 2024 07:28 Selected Answer: B Upvotes: 3

B. a table projection

A table projection in the context of Azure Cognitive Search allows you to transform complex data structures into a flat tabular model. This is particularly useful when dealing with nested or complex data extracted from documents by Form Recognizer and indexed by Azure Cognitive Search. By projecting this data into a table format, you make it easier to import and analyze in Power BI, which excels at working with tabular data.

Comment 12

ID: 1078103 User: sca88 Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Thu 23 Nov 2023 06:54 Selected Answer: B Upvotes: 3

I think it's B. Look at the exercise:
https://microsoftlearning.github.io/mslearn-knowledge-mining/Instructions/Labs/03-knowledge-store.html.
It says:
"You may want to normalize index records into a relational schema of tables, for query analysis and reporting with tools such as Microsoft Power BI."

Comment 13

ID: 1072447 User: kks0805 Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Thu 16 Nov 2023 14:42 Selected Answer: D Upvotes: 2

Should be D, the given answer is correct.

Comment 14

ID: 984236 User: james2033 Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Fri 18 Aug 2023 08:58 Selected Answer: B Upvotes: 1

Table projection: https://learn.microsoft.com/en-us/azure/search/knowledge-store-projections-examples#define-a-table-projection

File projection: for binary

Object projection: for tree structure

Table projection: for data records

Comment 15

ID: 920449 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Sun 11 Jun 2023 08:00 Selected Answer: B Upvotes: 2

Should be B. Power BI needs Table projections

Comment 16

ID: 895059 User: EliteAllen Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Thu 11 May 2023 14:33 Selected Answer: B Upvotes: 2

B. a table projection - Correct. The table projection feature in Azure Cognitive Search allows you to flatten complex data structures into a format that can be easily indexed and queried. This is especially useful when you want to analyze the extracted information using Power BI, as Power BI works best with flattened data structures.

Comment 17

ID: 879110 User: Pffffff Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Mon 24 Apr 2023 08:29 Selected Answer: D Upvotes: 2

ChatGPT: Object projections are a better option than tables in this scenario because they minimize the amount of data that needs to be transferred from the search index to Power BI, reducing latency and improving performance. Additionally, object projections are simpler to set up and require less configuration than tables.

To add an object projection to your indexer, you can use the Azure Cognitive Search portal or the Azure Cognitive Search REST API. You will need to define a mapping that specifies which fields from the source document should be included in the object projection, and how they should be mapped to JSON properties.

49. AI-102 Topic 2 Question 16

Sequence
144
Discussion ID
77610
Source URL
https://www.examtopics.com/discussions/microsoft/view/77610-exam-ai-102-topic-2-question-16-discussion/
Posted By
Eltooth
Posted At
July 18, 2022, 2:30 p.m.

Question

Your company uses an Azure Cognitive Services solution to detect faces in uploaded images. The method to detect the faces uses the following code.
image
You discover that the solution frequently fails to detect faces in blurred images and in images that contain sideways faces.
You need to increase the likelihood that the solution can detect faces in blurred images and images that contain sideways faces.
What should you do?

  • A. Use a different version of the Face API.
  • B. Use the Computer Vision service instead of the Face service.
  • C. Use the Identify method instead of the Detect method.
  • D. Change the detection model.

Suggested Answer

D

Answer Description Click to expand


Community Answer Votes

Comments 5 comments Click to expand

Comment 1

ID: 633017 User: Eltooth Badges: Highly Voted Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 14:30 Selected Answer: D Upvotes: 15

D is correct answer : change the detection model.

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/specify-detection-model#evaluate-different-models

Comment 2

ID: 1331190 User: Niko2345 Badges: Most Recent Relative Date: 1 year, 2 months ago Absolute Date: Tue 24 Dec 2024 18:07 Selected Answer: D Upvotes: 1

That's the type of question one could answer without knowing anything. For B and C it's unnecessary to even provide code snippets. From the code we can see both the API and the model; changing the model seems more reasonable than changing the API version.

Comment 3

ID: 1235176 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:16 Selected Answer: D Upvotes: 1

I say this answer is D.

Comment 4

ID: 1214391 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:14 Selected Answer: D Upvotes: 1

I use detection_02 or detection_03.

Comment 5

ID: 1152258 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 23:36 Selected Answer: D Upvotes: 1

It contains sideways faces, so change the Face API detection model to detection_03.
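
In the REST API, the detection model is selected with the `detectionModel` query parameter on the detect call. A minimal sketch that only builds the request URL (no call is made; the endpoint value is a hypothetical placeholder):

```python
from urllib.parse import urlencode

# Build the Face detect request URL selecting detection_03, which handles
# blurry and rotated (sideways) faces better than the default detection_01.
# The endpoint below is a hypothetical placeholder.
endpoint = "https://westus.api.cognitive.microsoft.com"
params = {
    "detectionModel": "detection_03",
    "recognitionModel": "recognition_04",
    "returnFaceId": "true",
}
detect_url = f"{endpoint}/face/v1.0/detect?{urlencode(params)}"

print(detect_url)
```

The same choice is exposed in the client SDKs as a `detection_model` argument on the detect methods.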

50. AI-102 Topic 2 Question 15

Sequence
153
Discussion ID
54794
Source URL
https://www.examtopics.com/discussions/microsoft/view/54794-exam-ai-102-topic-2-question-15-discussion/
Posted By
motu
Posted At
June 7, 2021, 1:11 p.m.

Question

HOTSPOT -
You develop an application that uses the Face API.
You need to add multiple images to a person group.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description Click to expand


Comments 17 comments Click to expand

Comment 1

ID: 381531 User: leo822 Badges: Highly Voted Relative Date: 4 years, 2 months ago Absolute Date: Tue 14 Dec 2021 07:16 Selected Answer: - Upvotes: 72

AddFaceFromStreamAsync. Step 5 on https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-add-faces

Comment 2

ID: 394813 User: azurelearner666 Badges: Highly Voted Relative Date: 4 years, 2 months ago Absolute Date: Thu 30 Dec 2021 17:11 Selected Answer: - Upvotes: 55

Wrong!
A - Stream (this is correct)
B - AddFaceFromStreamAsync
(literally the same code from Step 5 at https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-add-faces)

Comment 3

ID: 1225489 User: NagaoShingo Badges: Most Recent Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 15:36 Selected Answer: - Upvotes: 6

1. Stream
2. AddFaceFromStreamAsync

Comment 4

ID: 1220254 User: hatanaoki Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 15:53 Selected Answer: - Upvotes: 1

1. Stream
2. AddFaceFromStreamAsync

Comment 5

ID: 1218347 User: nanaw770 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Mon 25 Nov 2024 16:29 Selected Answer: - Upvotes: 2

1. Stream
2. AddFaceFromStreamAsync

Comment 6

ID: 1217540 User: nanaw770 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 24 Nov 2024 16:58 Selected Answer: - Upvotes: 1

Stream and CreateAsync

Comment 7

ID: 1182437 User: varinder82 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Wed 25 Sep 2024 11:54 Selected Answer: - Upvotes: 2

Final Answer:
A - Stream (this is correct)
B - AddFaceFromStreamAsync

Comment 8

ID: 1054471 User: chenglim Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Fri 26 Apr 2024 12:45 Selected Answer: - Upvotes: 2

1. Stream
2. AddFaceFromStreamAsync
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/add-faces#step-5-add-faces-to-the-persons

Comment 9

ID: 909501 User: examworld Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Wed 29 Nov 2023 18:09 Selected Answer: - Upvotes: 1

Stream
AddFaceFromStreamAsync

Comment 10

ID: 887274 User: Mike19D Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Thu 02 Nov 2023 14:15 Selected Answer: - Upvotes: 1

Stream
AddFaceFromStreamAsync

Comment 11

ID: 645975 User: ninjia Badges: - Relative Date: 3 years ago Absolute Date: Sun 12 Feb 2023 19:44 Selected Answer: - Upvotes: 4

Box 1: Stream
Box 2: AddFaceFromStreamAsync

File.OpenRead() returns a Stream object.

using (Stream stream = File.OpenRead(imagePath))
{
await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
}
ref: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/add-faces#step-5-add-faces-to-the-persons
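
The same loop can be sketched in Python. `FakeFaceClient` below is a stand-in for the real SDK client (the Python SDK method pattern is `person_group_person.add_face_from_stream`, mirroring C#'s `AddFaceFromStreamAsync`), so the loop is runnable without a Face resource; the group name and image bytes are hypothetical.

```python
import io

# Runnable sketch of the add-faces loop against a stub client. The images
# dict stands in for reading .jpg files from a per-person directory.
class FakeFaceClient:
    def __init__(self):
        self.calls = []

    def add_face_from_stream(self, person_group_id, person_id, stream):
        # Record what the real client would upload.
        self.calls.append((person_group_id, person_id, stream.read()))

images = {"person1": [b"jpg-bytes-1", b"jpg-bytes-2"]}
client = FakeFaceClient()

for person_id, blobs in images.items():
    for blob in blobs:
        # In production, throttle here (cf. WaitCallLimitPerSecondAsync above)
        # to stay under the service's calls-per-second limit.
        with io.BytesIO(blob) as stream:  # stands in for File.OpenRead
            client.add_face_from_stream("my-person-group", person_id, stream)

print(len(client.calls))
```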

Comment 12

ID: 633012 User: Eltooth Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Wed 18 Jan 2023 15:25 Selected Answer: - Upvotes: 3

Stream and AddFaceFromStreamAsync are correct answers.

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/add-faces#step-5-add-faces-to-the-persons

Comment 13

ID: 545669 User: Deepusuraj Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Fri 12 Aug 2022 05:50 Selected Answer: - Upvotes: 2

Parallel.For(0, PersonCount, async i =>
{
Guid personId = persons[i].PersonId;
string personImageDir = @"/path/to/person/i/images";

foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
{
await WaitCallLimitPerSecondAsync();

using (Stream stream = File.OpenRead(imagePath))
{
await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
}
}
});

Comment 14

ID: 517083 User: sumanshu Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Tue 05 Jul 2022 00:36 Selected Answer: - Upvotes: 3

Stream and AddFaceFromStreamAsync

Comment 15

ID: 448381 User: Happiness20 Badges: - Relative Date: 3 years, 11 months ago Absolute Date: Sun 20 Mar 2022 21:25 Selected Answer: - Upvotes: 3

Parallel.For(0, PersonCount, async i =>
{
Guid personId = persons[i].PersonId;
string personImageDir = @"/path/to/person/i/images";

foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
{
await WaitCallLimitPerSecondAsync();

using (Stream stream = File.OpenRead(imagePath))
{
await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
}
}
});


https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-add-faces


51. AI-102 Topic 4 Question 28

Sequence
154
Discussion ID
150257
Source URL
https://www.examtopics.com/discussions/microsoft/view/150257-exam-ai-102-topic-4-question-28-discussion/
Posted By
a8da4af
Posted At
Oct. 25, 2024, 9:27 p.m.

Question

You have an Azure subscription that contains an Azure AI Document Intelligence resource named AIdoc1 in the S0 tier.

You have the files shown in the following table.

image

You need to train a custom extraction model by using AIdoc1.

Which files can you upload to Document Intelligence Studio?

  • A. File1, File2, and File4 only
  • B. File2, and File5 only
  • C. File2, File4, and File5 only
  • D. File1, File2, File3, File4, and File5
  • E. File1 and File2 only

Suggested Answer

E

Answer Description Click to expand


Community Answer Votes

Comments 3 comments Click to expand

Comment 1

ID: 1302993 User: a8da4af Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Fri 25 Oct 2024 21:27 Selected Answer: E Upvotes: 11

Given the updated information that the S0 tier now supports files up to 500 MB, let’s re-evaluate each file's compatibility with Azure AI Document Intelligence:

File1 (JPG, 400MB) - Supported format and within the 500MB limit (supported).
File2 (PDF, 250MB) - Supported format and within the 500MB limit (supported).
File3 (PNG, 600MB) - Supported format but exceeds the 500MB limit (not supported).
File4 (XLSX, 900MB) - Unsupported format (not supported).
File5 (PDF, 160MB, password-locked) - Password-protected, which is not supported (not supported).
Given this, the files that can be uploaded are:

File1 (JPG, 400MB)
File2 (PDF, 250MB)
Correct answer: E. File1 and File2 only
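
The screening logic in the analysis above can be sketched as a small validator. The 500 MB S0 limit, the supported formats, and the password-protection rule come from the discussion; the exact extension list is an assumption based on Document Intelligence's documented input formats.

```python
# Screen training files the way the analysis above does: supported format,
# at most 500 MB on the S0 tier, and not password-protected.
SUPPORTED_EXTENSIONS = {".pdf", ".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".heif"}
MAX_BYTES = 500 * 1024 * 1024  # 500 MB limit on the S0 tier

def can_upload(extension: str, size_mb: int, password_protected: bool = False) -> bool:
    if extension.lower() not in SUPPORTED_EXTENSIONS:
        return False                      # e.g. XLSX is not a supported format
    if size_mb * 1024 * 1024 > MAX_BYTES:
        return False                      # over the tier's size limit
    return not password_protected         # encrypted files can't be processed

files = {
    "File1": (".jpg", 400, False),
    "File2": (".pdf", 250, False),
    "File3": (".png", 600, False),
    "File4": (".xlsx", 900, False),
    "File5": (".pdf", 160, True),
}
uploadable = [name for name, spec in files.items() if can_upload(*spec)]
print(uploadable)  # -> ['File1', 'File2']
```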

Comment 2

ID: 1322763 User: chrillelundmark Badges: Most Recent Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 14:33 Selected Answer: E Upvotes: 1

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/service-limits?view=doc-intel-4.0.0

Azure AI Document Intelligence operates on the raw content of the PDF to extract text, tables, and other elements. Encryption prevents the service from accessing the necessary data for processing.

Comment 3

ID: 1304314 User: e41f7aa Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 08:00 Selected Answer: E Upvotes: 3

File1 and File2 only

52. AI-102 Topic 1 Question 64

Sequence
155
Discussion ID
135071
Source URL
https://www.examtopics.com/discussions/microsoft/view/135071-exam-ai-102-topic-1-question-64-discussion/
Posted By
Razvan_C
Posted At
March 2, 2024, 10:44 p.m.

Question

You plan to perform predictive maintenance.

You collect IoT sensor data from 100 industrial machines for a year. Each machine has 50 different sensors that generate data at one-minute intervals. In total, you have 5,000 time series datasets.

You need to identify unusual values in each time series to help predict machinery failures.

Which Azure service should you use?

  • A. Azure AI Computer Vision
  • B. Cognitive Search
  • C. Azure AI Document Intelligence
  • D. Azure AI Anomaly Detector

Suggested Answer

D

Answer Description Click to expand


Community Answer Votes

Comments 6 comments Click to expand

Comment 1

ID: 1225124 User: p2006 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 06 Dec 2024 06:46 Selected Answer: D Upvotes: 2

https://learn.microsoft.com/en-us/azure/ai-services/anomaly-detector/overview#multivariate-anomaly-detection

Comment 2

ID: 1217587 User: nanaw770 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 24 Nov 2024 17:31 Selected Answer: D Upvotes: 2

D is OK.

Comment 3

ID: 1205022 User: anntv252 Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Fri 01 Nov 2024 13:55 Selected Answer: - Upvotes: 3

Correct
D. predict machinery failures is using Anomaly detector in azure AI service

Comment 4

ID: 1194739 User: sivapolam90 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 13 Oct 2024 09:25 Selected Answer: - Upvotes: 1

D. Azure AI Anomaly Detector

Comment 5

ID: 1165958 User: GHill1982 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Wed 04 Sep 2024 20:07 Selected Answer: D Upvotes: 3

Azure AI Anomaly Detector is an AI service that enables you to monitor and detect anomalies in your time series data with little machine learning knowledge, either batch validation or real-time inference.
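
Anomaly Detector is a hosted service, so there is nothing to run locally, but the task it performs — flagging unusual values in a time series — can be illustrated with a simple z-score check. This is only a conceptual stand-in, not the service's actual algorithm, and the threshold is an arbitrary choice.

```python
import statistics

# Conceptual illustration of "identify unusual values in a time series":
# flag points whose z-score exceeds a threshold. The hosted Anomaly Detector
# service uses far more sophisticated models than this.
def find_anomalies(series, threshold=2.5):
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    return [i for i, v in enumerate(series)
            if stdev > 0 and abs(v - mean) / stdev > threshold]

# Synthetic one-minute sensor readings with one obvious spike.
sensor = [20.1, 20.3, 19.9, 20.0, 20.2, 55.0, 20.1, 19.8]
print(find_anomalies(sensor))
```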

Comment 6

ID: 1164378 User: Razvan_C Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Mon 02 Sep 2024 21:44 Selected Answer: D Upvotes: 3

Answer seems to be correct

53. AI-102 Topic 2 Question 23

Sequence
158
Discussion ID
102569
Source URL
https://www.examtopics.com/discussions/microsoft/view/102569-exam-ai-102-topic-2-question-23-discussion/
Posted By
marti_tremblay000
Posted At
March 14, 2023, 12:17 p.m.

Question

You have an app that captures live video of exam candidates.

You need to use the Face service to validate that the subjects of the videos are real people.

What should you do?

  • A. Call the face detection API and retrieve the face rectangle by using the FaceRectangle attribute.
  • B. Call the face detection API repeatedly and check for changes to the FaceAttributes.HeadPose attribute.
  • C. Call the face detection API and use the FaceLandmarks attribute to calculate the distance between pupils.
  • D. Call the face detection API repeatedly and check for changes to the FaceAttributes.Accessories attribute.

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 11 comments Click to expand

Comment 1

ID: 1319884 User: Alan_CA Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Fri 29 Nov 2024 20:19 Selected Answer: B Upvotes: 1

Because the Liveness Detection attribute is not in the list, HeadPose is the right answer

Comment 2

ID: 1235170 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:14 Selected Answer: B Upvotes: 1

I say this answer is B.

Comment 3

ID: 1214376 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:02 Selected Answer: B Upvotes: 2

FaceAttributes.HeadPose is used this solution.

Comment 4

ID: 1194050 User: 1668f51 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 12 Apr 2024 01:50 Selected Answer: A Upvotes: 3

"You need to use the Face service to validate that the subjects of the videos are real people." Never read too much into these questions. A can detect whether it's real or not. It's not asking anything else. It's A.

Comment 5

ID: 1039698 User: sl_mslconsulting Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Tue 10 Oct 2023 19:49 Selected Answer: B Upvotes: 1

It can’t be A. If you only try to detect the faces without tracking their position over time, the system can be easily fooled in this very specific scenario.

Comment 6

ID: 936167 User: msdfqwerfewf Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 07:31 Selected Answer: A Upvotes: 3

Option A is more appropriate for validating the presence of real people in the live video. By calling the face detection API and retrieving the face rectangle using the FaceRectangle attribute, you can detect and locate faces within the video frames. This helps in confirming the presence of actual human faces in the captured video.

Option B, on the other hand, suggests repeatedly calling the face detection API and checking for changes to the FaceAttributes.HeadPose attribute. While head pose analysis can provide information about the orientation of detected faces, it may not be the most reliable approach for validating the authenticity of the subjects as real people. Checking for changes in head pose alone may not be sufficient to differentiate between real people and other forms of visual representations.

Comment 7

ID: 920678 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Sun 11 Jun 2023 14:47 Selected Answer: B Upvotes: 2

B: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/use-headpose#detect-head-gestures

Comment 9

ID: 890432 User: Rob77 Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Sat 06 May 2023 05:12 Selected Answer: - Upvotes: 3

B https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/use-headpose#detect-head-gestures
"Liveness detection is the task of determining that a subject is a real person and not an image or video representation"

Comment 10

ID: 872308 User: Mike19D Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Mon 17 Apr 2023 05:50 Selected Answer: B Upvotes: 3

The Answer is B. A could be a still picture

Comment 11

ID: 838775 User: marti_tremblay000 Badges: - Relative Date: 2 years, 12 months ago Absolute Date: Tue 14 Mar 2023 12:17 Selected Answer: B Upvotes: 2

The answer is B
Detect head gestures
You can detect head gestures like nodding and head shaking by tracking HeadPose changes in real time. You can use this feature as a custom liveness detector.
Reference https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/use-headpose
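
The custom liveness idea above — call detect repeatedly and watch `FaceAttributes.HeadPose` change — can be sketched with synthetic yaw angles. The 20-degree swing threshold is a hypothetical choice, not a documented value.

```python
# Sketch of a custom liveness check built on repeated detect calls: track the
# HeadPose yaw angle over time and treat a sufficiently large left-right
# swing as a live head-shake. The 20-degree threshold is a hypothetical choice.
def saw_head_shake(yaw_samples, min_swing_degrees=20.0):
    return (max(yaw_samples) - min(yaw_samples)) >= min_swing_degrees

# Yaw angles (degrees) that FaceAttributes.HeadPose might return across frames.
live_subject = [-2.0, -14.0, 3.0, 15.0, 1.0]   # subject shakes their head
printed_photo = [0.4, 0.6, 0.5, 0.4, 0.5]      # a still image barely moves

print(saw_head_shake(live_subject), saw_head_shake(printed_photo))
```

This is why option A fails: a printed photo still yields a face rectangle on every frame, but its head pose never changes.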

54. AI-102 Topic 1 Question 17

Sequence
160
Discussion ID
75106
Source URL
https://www.examtopics.com/discussions/microsoft/view/75106-exam-ai-102-topic-1-question-17-discussion/
Posted By
satishk4u
Posted At
May 3, 2022, 1:35 p.m.

Question

You plan to perform predictive maintenance.
You collect IoT sensor data from 100 industrial machines for a year. Each machine has 50 different sensors that generate data at one-minute intervals. In total, you have 5,000 time series datasets.
You need to identify unusual values in each time series to help predict machinery failures.
Which Azure service should you use?

  • A. Anomaly Detector
  • B. Cognitive Search
  • C. Form Recognizer
  • D. Custom Vision

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 9 comments Click to expand

Comment 1

ID: 1319411 User: Alan_CA Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 19:37 Selected Answer: A Upvotes: 1

I don't think this question is still in the exam :
Starting on the 20th of September, 2023 you won’t be able to create new Anomaly Detector resources. The Anomaly Detector service is being retired on the 1st of October, 2026.

Comment 2

ID: 1213805 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 19 May 2024 15:28 Selected Answer: A Upvotes: 1

A is right.

Comment 3

ID: 1193433 User: CDL_Learner Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 11 Apr 2024 06:44 Selected Answer: A Upvotes: 1

The best Azure service to use in this scenario is A. Anomaly Detector1.

Comment 4

ID: 857536 User: josemiguelch Badges: - Relative Date: 2 years, 11 months ago Absolute Date: Sat 01 Apr 2023 04:28 Selected Answer: - Upvotes: 1

Azure Anomaly Detector is the recommended Azure service to use for this scenario.

Comment 5

ID: 778432 User: KingChuang Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Tue 17 Jan 2023 02:10 Selected Answer: - Upvotes: 4

on my exam (2023-01-16 Passed)

My Answer:A

Comment 6

ID: 750062 User: kml2003 Badges: - Relative Date: 3 years, 2 months ago Absolute Date: Mon 19 Dec 2022 18:04 Selected Answer: A Upvotes: 2

Anomaly detection

Comment 7

ID: 631382 User: Eltooth Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Thu 14 Jul 2022 15:05 Selected Answer: A Upvotes: 1

A is correct answer : Anomaly Detector.

Comment 8

ID: 626644 User: happychuks Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Sun 03 Jul 2022 18:21 Selected Answer: A Upvotes: 2

https://azure.microsoft.com/en-us/services/cognitive-services/anomaly-detector/

Comment 9

ID: 596334 User: satishk4u Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Tue 03 May 2022 13:35 Selected Answer: - Upvotes: 1

Was on exam on 03-May-2022

55. AI-102 Topic 2 Question 18

Sequence
161
Discussion ID
81861
Source URL
https://www.examtopics.com/discussions/microsoft/view/81861-exam-ai-102-topic-2-question-18-discussion/
Posted By
michasacuer
Posted At
Sept. 12, 2022, 7:44 p.m.

Question

You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code.
image
During testing, you discover that the call to the GetReadResultAsync method occurs before the read operation is complete.
You need to prevent the GetReadResultAsync method from proceeding until the read operation is complete.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Remove the operation_id parameter.
  • B. Add code to verify the read_results.status value.
  • C. Add code to verify the status of the read_operation_location value.
  • D. Wrap the call to get_read_result within a loop that contains a delay.

Suggested Answer

BD

Answer Description Click to expand


Community Answer Votes

Comments 8 comments Click to expand

Comment 1

ID: 839783 User: RAN_L Badges: Highly Voted Relative Date: 2 years, 5 months ago Absolute Date: Fri 15 Sep 2023 10:39 Selected Answer: BD Upvotes: 5

B. Add code to verify the read_results.status value.
D. Wrap the call to get_read_result within a loop that contains a delay.

Explanation:

In order to prevent the GetReadResultAsync method from proceeding until the read operation is complete, we need to check the status of the read operation and wait until it's completed. To do this, we can add code to verify the status of the read_results.status value. If the status is not "succeeded", we can add a delay and then retry the operation until it's complete. This can be achieved by wrapping the call to get_read_result within a loop that contains a delay.

Removing the operation_id parameter or adding code to verify the status of the read_operation_location value will not solve the issue of waiting for the read operation to complete before proceeding with the GetReadResultAsync method.
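
The loop-plus-delay fix can be made concrete with a stub. `FakeClient` below simulates a slow Read operation (returning "running" a few times before "succeeded"); the real Python SDK call is `get_read_result(operation_id)` and the non-terminal statuses are "running" and "notStarted".

```python
import time

# Runnable sketch of the fix: verify the status (action B) inside a polling
# loop with a delay (action D). FakeClient stands in for the Computer Vision
# client and the operation id is a hypothetical placeholder.
class FakeClient:
    def __init__(self, polls_until_done=3):
        self._remaining = polls_until_done

    def get_read_result(self, operation_id):
        self._remaining -= 1
        status = "succeeded" if self._remaining <= 0 else "running"
        return {"status": status, "operation_id": operation_id}

client = FakeClient()
while True:
    read_results = client.get_read_result("op-123")
    # Action B: verify the status value before using the results.
    if read_results["status"] not in ("running", "notStarted"):
        break
    # Action D: delay inside the loop before polling again.
    time.sleep(0.01)

print(read_results["status"])
```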

Comment 2

ID: 1220236 User: hatanaoki Badges: Most Recent Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 15:40 Selected Answer: BD Upvotes: 1

B and D are right answer.

Comment 3

ID: 1214389 User: takaimomoGcup Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 20 Nov 2024 17:11 Selected Answer: BD Upvotes: 2

Memorize the "read_results" wording. So B and D.

Comment 4

ID: 1214388 User: takaimomoGcup Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 20 Nov 2024 17:10 Selected Answer: BD Upvotes: 1

B and D.

Comment 5

ID: 1135562 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 30 Jul 2024 07:06 Selected Answer: - Upvotes: 4

Where's GetReadResultAsync in the code? It looks like a C# method, not Python code.

Comment 6

ID: 936158 User: Pixelmate Badges: - Relative Date: 2 years, 2 months ago Absolute Date: Thu 28 Dec 2023 08:24 Selected Answer: - Upvotes: 3

This was on exam 28/06

Comment 7

ID: 717329 User: halfway Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Sat 13 May 2023 13:34 Selected Answer: BD Upvotes: 1

Duplicate of Topic 2, Question 2.

Comment 8

ID: 667294 User: michasacuer Badges: - Relative Date: 2 years, 12 months ago Absolute Date: Sun 12 Mar 2023 20:44 Selected Answer: BD Upvotes: 1

Correct

56. AI-102 Topic 2 Question 27

Sequence
162
Discussion ID
108600
Source URL
https://www.examtopics.com/discussions/microsoft/view/108600-exam-ai-102-topic-2-question-27-discussion/
Posted By
Rob77
Posted At
May 6, 2023, 5:29 a.m.

Question

You have a mobile app that manages printed forms.

You need the app to send images of the forms directly to Forms Recognizer to extract relevant information. For compliance reasons, the image files must not be stored in the cloud.

In which format should you send the images to the Form Recognizer API endpoint?

  • A. raw image binary
  • B. form URL encoded
  • C. JSON

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 9 comments Click to expand

Comment 1

ID: 1056561 User: trashbox Badges: Highly Voted Relative Date: 1 year, 10 months ago Absolute Date: Mon 29 Apr 2024 04:04 Selected Answer: - Upvotes: 5

Appeared on Oct/29/2023

Comment 2

ID: 1220328 User: taiwan_is_not_china Badges: Most Recent Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 17:15 Selected Answer: A Upvotes: 1

The answer to this question is A.

Comment 3

ID: 1218285 User: nanaw770 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Mon 25 Nov 2024 14:54 Selected Answer: A Upvotes: 1

A is right.

Comment 4

ID: 1214368 User: takaimomoGcup Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Wed 20 Nov 2024 16:53 Selected Answer: A Upvotes: 1

A is right answer.

Comment 5

ID: 1108960 User: ankitdhir Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 29 Jun 2024 17:47 Selected Answer: A Upvotes: 2

It's correct.

Comment 6

ID: 995288 User: M25 Badges: - Relative Date: 2 years ago Absolute Date: Thu 29 Feb 2024 19:36 Selected Answer: A Upvotes: 3

A. raw image binary
https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeReceiptAsync
Request body: Document containing the receipt image(s) to be analyzed. The POST body should be the raw image binary, or the image URL in JSON.

https://ittichaicham.com/2020/03/call-azure-form-recognizer-api-on-sharepoint-document-image-url-in-power-automate/
Power Automate (formerly Microsoft Flow) can call Azure Form Recognizer via the connector. Since Power Automate is a cloud solution, the natural choice is to use the image URL. This should work fine if the URL is accessible to the public or requires no authentication. Unfortunately, the company’s SharePoint URL, most of the time, is not.
To solve this, we can add another flow step to move the SharePoint file to where it is accessible, or, better, instead of using file URL, we can pass binary content in the Form Recognizer API.
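
"Raw image binary" simply means the POST body is the image bytes themselves, with a binary content type, rather than a JSON payload carrying a URL. A sketch that builds such a request without sending it — the endpoint, key, and image bytes are hypothetical placeholders, and the path follows the Form Recognizer v2.1 prebuilt-receipt route:

```python
import urllib.request

# Build (but do not send) an analyze request carrying the image as raw binary.
# The endpoint, key, and image bytes below are hypothetical placeholders.
endpoint = "https://<resource>.cognitiveservices.azure.com"
image_bytes = b"\xff\xd8\xff..."  # raw JPEG bytes read on the device, not a URL

request = urllib.request.Request(
    url=f"{endpoint}/formrecognizer/v2.1/prebuilt/receipt/analyze",
    data=image_bytes,  # the body IS the binary; no JSON wrapper, no stored blob
    method="POST",
    headers={
        "Content-Type": "application/octet-stream",
        "Ocp-Apim-Subscription-Key": "<key>",
    },
)

print(request.get_header("Content-type"))
```

Because the bytes travel directly in the request body, no image file ever needs to be staged in cloud storage, which is what the compliance requirement rules out.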

Comment 7

ID: 915207 User: ziggy1117 Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Tue 05 Dec 2023 11:09 Selected Answer: - Upvotes: 1

Should be A. When you send images to the endpoint, the images don't get stored anywhere.

Comment 8

ID: 898069 User: hens Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Wed 15 Nov 2023 09:06 Selected Answer: - Upvotes: 3

chat gpt "To send images directly to Forms Recognizer and extract relevant information without storing the image files in the cloud, you should use the raw image binary format."

Comment 9

ID: 890448 User: Rob77 Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Mon 06 Nov 2023 06:29 Selected Answer: - Upvotes: 1

Looks like URL (not sure about the "encoded" part though!)
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/v3-migration-guide?view=form-recog-3.0.0#analyze-request-body

57. AI-102 Topic 4 Question 17

Sequence
163
Discussion ID
135946
Source URL
https://www.examtopics.com/discussions/microsoft/view/135946-exam-ai-102-topic-4-question-17-discussion/
Posted By
-
Posted At
March 13, 2024, 6:06 p.m.

Question

HOTSPOT -

You have an app named App1 that uses Azure AI Document Intelligence to analyze medical records and provide pharmaceutical dosage recommendations for patients.

You send a request to App1 and receive the following response.

image

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 12 comments Click to expand

Comment 1

ID: 1186863 User: Murtuza Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Sun 31 Mar 2024 17:26 Selected Answer: - Upvotes: 13

The form elements were recognized with greater than 70 percent confidence: No. The fields object in the response is empty, which suggests that no form elements were recognized. Therefore, we cannot say that they were recognized with greater than 70 percent confidence.

Comment 2

ID: 1186860 User: Murtuza Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Sun 31 Mar 2024 17:14 Selected Answer: - Upvotes: 8

The chosen model is suitable for the intended use case: No. The model used here is “prebuilt-health InsuranceCard.us”, which is designed to extract information from health insurance cards in the US. However, the intended use case is to analyze medical records and provide pharmaceutical dosage recommendations for patients. A more suitable model would be one specifically trained for medical record analysis.
The text content was recognized with greater than 70 percent confidence: Yes. The confidence scores for the recognized words “Blood” (0.766), “Pressure” (0.716), and “118/72” (0.761) are all greater than 70 percent.
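
The two checks above can be expressed programmatically. The fragment below is a minimal stand-in for the response, built only from the values quoted in the discussion (the real response has many more fields):

```python
# Check word-level confidence and the presence of form elements, using the
# three words and scores quoted above. This dict is a hypothetical, trimmed
# stand-in for the full analyze response.
response = {
    "modelId": "prebuilt-healthInsuranceCard.us",
    "words": [
        {"content": "Blood", "confidence": 0.766},
        {"content": "Pressure", "confidence": 0.716},
        {"content": "118/72", "confidence": 0.761},
    ],
    "documents": [{"fields": {}}],  # empty fields: no form elements recognized
}

all_text_above_70 = all(w["confidence"] > 0.70 for w in response["words"])
has_form_elements = bool(response["documents"][0]["fields"])

print(all_text_above_70, has_form_elements)
```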

Comment 2.1

ID: 1284186 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:16 Selected Answer: - Upvotes: 1

Why do you think the health insurance card is not "analyzing medical records"? I know, but it is about English-language interpretation: I can read the insurance card to get the user's details and then use that to specify the dosage.

But yes, I will go for No as well, as they probably mean another model.

Comment 2.1.1

ID: 1284187 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:17 Selected Answer: - Upvotes: 1

What does Microsoft mean by form-elements??? I just cannot figure it out

Comment 2.1.1.1

ID: 1319299 User: nastolgia Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Thu 28 Nov 2024 16:08 Selected Answer: - Upvotes: 3

No worries, we don't have a "form" element in the response.
No, Yes, No

Comment 3

ID: 1304308 User: e41f7aa Badges: Most Recent Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 07:29 Selected Answer: - Upvotes: 6

No, yes. No

Comment 4

ID: 1277528 User: djsoyboi Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 03 Sep 2024 13:44 Selected Answer: - Upvotes: 1

Claude said Based on the information provided in the JSON response from Image 1, I'll analyze each statement and determine whether it's true or false:

The chosen model is suitable for the intended use case.
Answer: Yes
Explanation: The model used is "prebuilt-healthInsuranceCard.us", which appears suitable for analyzing medical records as requested in the scenario.
The text content was recognized with greater than 70 percent confidence.
Answer: Yes
Explanation: All recognized words ("Blood", "Pressure", "118/72") have confidence scores above 70%:


"Blood": 0.766 (76.6%)
"Pressure": 0.716 (71.6%)
"118/72": 0.761 (76.1%)


The form elements were recognized with greater than 70 percent confidence.
Answer: Yes
Explanation: While we don't have specific information about form elements, the overall document confidence is 1 (100%), as shown in the "confidence": 1 field near the bottom of the JSON.

Comment 5

ID: 1264997 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 07:10 Selected Answer: - Upvotes: 3

It's N-Y-N, no doubts

Comment 6

ID: 1247911 User: krzkrzkra Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 14 Jul 2024 20:06 Selected Answer: - Upvotes: 2

N
Y
No

Comment 7

ID: 1240510 User: anto69 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 02 Jul 2024 05:28 Selected Answer: - Upvotes: 1

N - Y - N, no form elements

Comment 8

ID: 1236100 User: aiLearner121 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Mon 24 Jun 2024 04:40 Selected Answer: - Upvotes: 2

no
yes
no

Comment 9

ID: 1218687 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 26 May 2024 04:44 Selected Answer: - Upvotes: 3

No
Yes
Yes

58. AI-102 Topic 1 Question 19

Sequence
165
Discussion ID
62493
Source URL
https://www.examtopics.com/discussions/microsoft/view/62493-exam-ai-102-topic-1-question-19-discussion/
Posted By
htolajide
Posted At
Sept. 21, 2021, 4:31 p.m.

Question

HOTSPOT -
You are developing an internet-based training solution for remote learners.
Your company identifies that during the training, some learners leave their desk for long periods or become distracted.
You need to use a video and audio feed from each learner's computer to detect whether the learner is present and paying attention. The solution must minimize development effort and identify each learner.
Which Azure Cognitive Services service should you use for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description Click to expand

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/what-are-cognitive-services

Comments 10 comments Click to expand

Comment 1

ID: 448959 User: htolajide Badges: Highly Voted Relative Date: 3 years, 11 months ago Absolute Date: Mon 21 Mar 2022 17:31 Selected Answer: - Upvotes: 28

The answer is correct

Comment 2

ID: 1219574 User: takaimomoGcup Badges: Most Recent Relative Date: 1 year, 3 months ago Absolute Date: Wed 27 Nov 2024 16:19 Selected Answer: - Upvotes: 1

Face Face Speech

Comment 3

ID: 1217637 User: nanaw770 Badges: - Relative Date: 1 year, 3 months ago Absolute Date: Sun 24 Nov 2024 18:07 Selected Answer: - Upvotes: 1

Face Face Speech

Comment 4

ID: 1055014 User: trashbox Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sat 27 Apr 2024 02:48 Selected Answer: - Upvotes: 3

The answer is correct. Face, Face, and Speech. What a terrible application :(

Comment 5

ID: 631384 User: Eltooth Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Sat 14 Jan 2023 16:08 Selected Answer: - Upvotes: 2

Face
Face
Speech

Comment 6

ID: 543974 User: Coderhbti Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Tue 09 Aug 2022 17:49 Selected Answer: - Upvotes: 3

From Video feed - Face
Facial Expression from - Face
Audio Feed is - Speech

Comment 7

ID: 517007 User: sumanshu Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Mon 04 Jul 2022 21:33 Selected Answer: - Upvotes: 3

From Video feed - Face
Facial Expression from - Face
Audio Feed is - Speech

Comment 8

ID: 473376 User: mikegsm Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Fri 06 May 2022 10:18 Selected Answer: - Upvotes: 1

Correct

Comment 9

ID: 466660 User: Adedoyin_Simeon Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Sat 23 Apr 2022 18:19 Selected Answer: - Upvotes: 2

The answer is correct

Comment 10

ID: 449090 User: RomanXXX Badges: - Relative Date: 3 years, 11 months ago Absolute Date: Mon 21 Mar 2022 21:08 Selected Answer: - Upvotes: 3

Seems ok

59. AI-102 Topic 10 Question 1

Sequence
171
Discussion ID
77636
Source URL
https://www.examtopics.com/discussions/microsoft/view/77636-exam-ai-102-topic-10-question-1-discussion/
Posted By
Eltooth
Posted At
July 20, 2022, 1:58 p.m.

Question

DRAG DROP -
You are developing a solution for the Management-Bookkeepers group to meet the document processing requirements. The solution must contain the following components:
✑ A Form Recognizer resource
✑ An Azure web app that hosts the Form Recognizer sample labeling tool
The Management-Bookkeepers group needs to create a custom table extractor by using the sample labeling tool.
Which three actions should the Management-Bookkeepers group perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand

Step 1: Create a new project and load sample documents
Create a new project. Projects store your configurations and settings.
Step 2: Label the sample documents
When you create or open a project, the main tag editor window opens.
Step 3: Train a custom model.
Finally, train a custom model.
Reference:
https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/label-tool

Comments 9 comments Click to expand

Comment 1

ID: 634023 User: Eltooth Badges: Highly Voted Relative Date: 3 years, 7 months ago Absolute Date: Wed 20 Jul 2022 13:58 Selected Answer: - Upvotes: 22

Answer is correct.
Create a project
Label the sample docs
Train the model

Comment 1.1

ID: 636523 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 25 Jul 2022 09:35 Selected Answer: - Upvotes: 4

https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/label-tool#create-a-new-project

Comment 2

ID: 1316782 User: 3fbc31b Badges: Most Recent Relative Date: 1 year, 3 months ago Absolute Date: Sat 23 Nov 2024 20:14 Selected Answer: - Upvotes: 2

The answer is correct. The "label tool" is only available for custom models.

Comment 3

ID: 1267912 User: JuneRain Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 18 Aug 2024 04:56 Selected Answer: - Upvotes: 2

This question was in the test I took on August 2024

Comment 4

ID: 1236637 User: JacobZ Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 25 Jun 2024 01:09 Selected Answer: - Upvotes: 2

Got this case study in the exam, Jun 2024.

Comment 5

ID: 1214418 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:32 Selected Answer: - Upvotes: 1

Is this question still available on May 21, 2024?

Comment 5.1

ID: 1273766 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Wed 28 Aug 2024 06:17 Selected Answer: - Upvotes: 1

I'll report soon. Document Intelligence Studio makes the app obsolete, so I wonder if this is still relevant.

Comment 6

ID: 1145898 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 10 Feb 2024 03:34 Selected Answer: - Upvotes: 2

The requirement has nothing to do with a composite model; the given answer is correct.

Comment 7

ID: 1102015 User: Gvalli Badges: - Relative Date: 2 years, 2 months ago Absolute Date: Thu 21 Dec 2023 00:15 Selected Answer: - Upvotes: 3

A modified version of this was in the exam today.

60. AI-102 Topic 2 Question 37

Sequence
175
Discussion ID
135013
Source URL
https://www.examtopics.com/discussions/microsoft/view/135013-exam-ai-102-topic-2-question-37-discussion/
Posted By
Harry300
Posted At
March 1, 2024, 6:16 p.m.

Question

DRAG DROP -

You have an app that uses Azure AI and a custom trained classifier to identify products in images.

You need to add new products to the classifier. The solution must meet the following requirements:

• Minimize how long it takes to add the products.
• Minimize development effort.

Which five actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

image

Suggested Answer

image
Answer Description Click to expand


Comments 17 comments Click to expand

Comment 1

ID: 1163639 User: Harry300 Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Fri 01 Mar 2024 18:16 Selected Answer: - Upvotes: 33

First step should be Custom Vision.
Then Upload sample images / Label / Retrain / Publish
Custom classifiers have to go through custom vision, obviously

Comment 1.1

ID: 1163860 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 02:32 Selected Answer: - Upvotes: 3

As per this https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-model-customization

You can train a custom model using either the Custom Vision service or the Image Analysis 4.0 service with model customization.

Comment 2

ID: 1312794 User: 3fbc31b Badges: Most Recent Relative Date: 1 year, 3 months ago Absolute Date: Fri 15 Nov 2024 20:29 Selected Answer: - Upvotes: 1

Given that the requirement is to minimize development effort, the first step should be to use the Custom Vision portal, not Visual Studio, as that requires more development effort than a GUI such as the Custom Vision portal.

Comment 3

ID: 1282668 User: 19089ba Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 12 Sep 2024 15:56 Selected Answer: - Upvotes: 2

I think from now on the first step should be Vision Studio, which, if I understand correctly, is the new Custom Vision Studio
https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/concepts/compare-alternatives

Comment 4

ID: 1248497 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 15 Jul 2024 19:56 Selected Answer: - Upvotes: 2

From the Custom Vision portal, open the project.
Upload sample images of the new products.
Label the sample images.
Retrain the model.
Publish the model.
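For reference, these five portal steps map onto calls in the Custom Vision training REST API. The sketch below only assembles the requests rather than sending anything; the v3.3 API version, the `Training-Key` header, and every endpoint/ID value are assumptions or placeholders, not values from this question:

```python
# Sketch only: assemble the Custom Vision training requests that correspond
# to the portal steps. API version, header name, and all IDs are
# assumptions/placeholders.
BASE = "https://example.cognitiveservices.azure.com/customvision/v3.3/training"
HEADERS = {"Training-Key": "<placeholder-key>"}

def request_for(step: str, project_id: str = "<project-id>") -> dict:
    """Build (but do not send) the request for one portal step."""
    paths = {
        "upload":  f"/projects/{project_id}/images/files",   # step 2: upload sample images
        "label":   f"/projects/{project_id}/images/tags",    # step 3: label (tag) the images
        "train":   f"/projects/{project_id}/train",          # step 4: retrain the model
        "publish": f"/projects/{project_id}/iterations/<iteration-id>/publish",  # step 5
    }
    return {"method": "POST", "url": BASE + paths[step], "headers": HEADERS}

for step in ("upload", "label", "train", "publish"):
    print(request_for(step)["url"])
```

The portal performs the same sequence behind its GUI, which is why it minimizes development effort compared with coding against the API directly.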

Comment 5

ID: 1228868 User: mon2002 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 12 Jun 2024 08:52 Selected Answer: - Upvotes: 4

From the Custom Vision Portal, Open the Project.
Upload Sample images of the new products.
Label the Sample images.
Retrain the model.
Publish the model.

Comment 6

ID: 1218241 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:21 Selected Answer: - Upvotes: 1

1. From the Custom Vision portal, open the project
2. Label
3. Upload
4 .Retrain
5. Publish

Comment 6.1

ID: 1294709 User: aa18a1a Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 08 Oct 2024 14:26 Selected Answer: - Upvotes: 2

You must upload the images prior to labeling.
Not Microsoft documentation but here's a pretty detailed walkthrough:
https://blog.roboflow.com/how-to-label-azure-custom-vision/#:~:text=Label%20Computer%20Vision%20Data%20in%20Azure%20Custom%20Vision,...%205%20Step%20%234%3A%20Label%20an%20Image%20

Comment 7

ID: 1213816 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 19 May 2024 15:41 Selected Answer: - Upvotes: 1

Will this question be on the actual exam?

Comment 7.1

ID: 1274905 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Fri 30 Aug 2024 06:29 Selected Answer: - Upvotes: 1

yes it was

Comment 8

ID: 1210048 User: aks_exam Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Sun 12 May 2024 03:17 Selected Answer: - Upvotes: 3

So I would choose:
1. From the Custom Vision portal, open the project
2. Upload sample images of the new products
3. Label the sample images
4. Retrain the model
5. Publish the model

Labelling comes after uploading the sample images.

Comment 9

ID: 1185710 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 29 Mar 2024 23:07 Selected Answer: - Upvotes: 1

To add new products to the classifier while minimizing time and development effort, you should perform the following actions in sequence:

From the Custom Vision portal, open the project.
Upload sample images of the new products.
Label the sample images.
Retrain the model.
Publish the model.
This sequence ensures that the new product images are properly added, labeled, and incorporated into the existing model, and that the updated model is made available for use by your application.

Comment 10

ID: 1183221 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 26 Mar 2024 11:41 Selected Answer: - Upvotes: 1

Final Answer:

1. From the Custom Vision portal, open the project
2. Label the sample images
3. Upload sample images of the new products
4 . Retrain the model
5. Publish the model

Comment 11

ID: 1175963 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 17 Mar 2024 18:00 Selected Answer: - Upvotes: 2

4) Retrain the Model:
Trigger the retraining process for your custom classifier. This step involves:
Using the labeled samples to train the model.
Fine-tuning the existing classifier with the new data.

5) Publish the Model:
Once the retraining is complete and the model performs well on validation data, publish the updated classifier.
The published model will be ready for inference in your application, allowing it to identify the new products.

Comment 12

ID: 1175962 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 17 Mar 2024 18:00 Selected Answer: - Upvotes: 2

Your proposed sequence of actions for adding new products to the classifier is a good start! Let’s refine it a bit to ensure it aligns with best practices:

1) Open the Custom Vision Project:
Begin by accessing your project in the Custom Vision portal. This is where you’ll manage your custom-trained classifier.

2) Label the Sample Images:
Next, label the sample images you’ve collected for the new products. Assign appropriate tags or classes to each image based on the product category.
Proper labeling is crucial for effective training.

3) Upload Sample Images:
Upload the labeled sample images to your project. These images will serve as the training data for your classifier.
Make sure you have a diverse set of samples to represent different variations of the new products.

Comment 13

ID: 1166337 User: arcameon Badges: - Relative Date: 2 years ago Absolute Date: Tue 05 Mar 2024 10:48 Selected Answer: - Upvotes: 3

According to chat GPT, the correct sequence is the following :
1. From the Custom Vision portal, open the project
2. Label the sample images
3. Upload sample images of the new products
4 . Retrain the model
5. Publish the model

Labeling the sample images before uploading them is crucial because it helps in structuring and organizing the dataset appropriately.

Comment 13.1

ID: 1183224 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 26 Mar 2024 11:43 Selected Answer: - Upvotes: 2

2. Label the sample images
3. Upload sample images of the new products
Wrong. How can you label before uploading? It should be upload and then label.

61. AI-102 Topic 4 Question 26

Sequence
180
Discussion ID
150069
Source URL
https://www.examtopics.com/discussions/microsoft/view/150069-exam-ai-102-topic-4-question-26-discussion/
Posted By
a8da4af
Posted At
Oct. 22, 2024, 9:29 p.m.

Question

You have an Azure AI Search resource named Search1.

You have an app named App1 that uses Search1 to index content.

You need to add a custom skill to App1 to ensure that the app can recognize and retrieve properties from invoices by using Search1.

What should you include in the solution?

  • A. Azure AI Immersive Reader
  • B. Azure OpenAI
  • C. Azure AI Document Intelligence
  • D. Azure AI Custom Vision

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 3 comments Click to expand

Comment 1

ID: 1304483 User: Christian_garcia_martin Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 14:58 Selected Answer: - Upvotes: 5

Document Intelligence , formerly Form Recognizer

Comment 2

ID: 1308491 User: jolimon Badges: Most Recent Relative Date: 1 year, 4 months ago Absolute Date: Thu 07 Nov 2024 18:34 Selected Answer: C Upvotes: 2

C, no doubts

Comment 3

ID: 1301689 User: a8da4af Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 22 Oct 2024 21:29 Selected Answer: - Upvotes: 4


The correct answer is:

C. Azure AI Document Intelligence

Reasoning: To ensure the app can recognize and retrieve properties from invoices using Azure AI Search, you need a service that specializes in analyzing and extracting data from structured documents like invoices. Azure AI Document Intelligence (formerly known as Form Recognizer) provides pre-built models for processing invoices, extracting fields such as invoice numbers, dates, totals, and more.

You can integrate Azure AI Document Intelligence with Azure Cognitive Search by adding a custom skill to the search pipeline, allowing the app to extract and index specific properties from invoices. This ensures that App1 can retrieve relevant content from the invoices efficiently.
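A custom skill plugs into the enrichment pipeline through a fixed Web API contract: Azure AI Search POSTs a batch of records, and the skill must echo each `recordId` back with a `data` payload. The handler below sketches only that contract; the invoice fields are hard-coded stand-ins for what a real call to the Document Intelligence prebuilt-invoice model would return:

```python
import json

# Minimal sketch of an Azure AI Search custom Web API skill handler.
# The enrichment pipeline POSTs {"values": [...]}; the skill must return
# the same shape, matching each input recordId. The invoice fields here
# are hard-coded stand-ins for real Document Intelligence output.
def run_skill(request_body: str) -> str:
    request = json.loads(request_body)
    values = []
    for record in request["values"]:
        # A real skill would send record["data"] (e.g. a document URL)
        # to the Document Intelligence prebuilt-invoice model here.
        values.append({
            "recordId": record["recordId"],
            "data": {"invoiceId": "INV-001", "invoiceTotal": 110.0},
            "errors": [],
            "warnings": [],
        })
    return json.dumps({"values": values})

request_body = json.dumps(
    {"values": [{"recordId": "r1", "data": {"formUrl": "https://example/invoice.pdf"}}]}
)
response = json.loads(run_skill(request_body))
print(response["values"][0]["recordId"])  # r1
```

The indexer then maps the returned `data` fields into the search index, which is how App1 ends up able to query invoice properties.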

62. AI-102 Topic 4 Question 30

Sequence
181
Discussion ID
149783
Source URL
https://www.examtopics.com/discussions/microsoft/view/149783-exam-ai-102-topic-4-question-30-discussion/
Posted By
cheetah313
Posted At
Oct. 19, 2024, 11:23 a.m.

Question

HOTSPOT -

You have an Azure subscription that contains an Azure AI Document Intelligence resource named DI1.

You build an app named App1 that analyzes PDF files for handwritten content by using DI1.

You need to ensure that App1 will recognize the handwritten content.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 4 comments Click to expand

Comment 1

ID: 1299966 User: cheetah313 Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Sat 19 Oct 2024 11:23 Selected Answer: - Upvotes: 14

I Think field one should be "prebuilt-read" as this is a model suitable for OCR with handwritten documents according to ChatGPT.

Comment 1.1

ID: 1304296 User: Christian_garcia_martin Badges: - Relative Date: 1 year, 4 months ago Absolute Date: Tue 29 Oct 2024 06:09 Selected Answer: - Upvotes: 4

It's the same as question 18 of topic 4, but this time it's in Python instead of C#; the answers are the same:
prebuilt-document
0.75

Comment 2

ID: 1306595 User: Homi Badges: Highly Voted Relative Date: 1 year, 4 months ago Absolute Date: Sun 03 Nov 2024 19:11 Selected Answer: - Upvotes: 8

Prebuilt-read
https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/prebuilt/read?view=doc-intel-4.0.0&tabs=sample-code

Comment 3

ID: 1308492 User: jolimon Badges: Most Recent Relative Date: 1 year, 4 months ago Absolute Date: Thu 07 Nov 2024 18:38 Selected Answer: - Upvotes: 3

Prebuilt-read
0.75
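The second blank (0.75) is a confidence threshold applied to the handwritten styles in the analyze result. In the real SDK, `result.styles` is a list of style objects with `is_handwritten` and `confidence` attributes; the sketch below uses stand-in dicts with assumed values rather than live output:

```python
# Sketch of the handwriting check the question's code performs.
# The styles list is a stand-in for result.styles from the SDK;
# the confidence values are illustrative assumptions.
styles = [
    {"is_handwritten": True, "confidence": 0.9},
    {"is_handwritten": False, "confidence": 0.99},
]

def contains_handwriting(styles, threshold=0.75):
    """True if any style is handwritten with confidence above the threshold."""
    return any(s["is_handwritten"] and s["confidence"] > threshold for s in styles)

print(contains_handwriting(styles))  # True
```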

63. AI-102 Topic 1 Question 46

Sequence
191
Discussion ID
102672
Source URL
https://www.examtopics.com/discussions/microsoft/view/102672-exam-ai-102-topic-1-question-46-discussion/
Posted By
RAN_L
Posted At
March 15, 2023, 10:38 a.m.

Question

You have an app that analyzes images by using the Computer Vision API.

You need to configure the app to provide an output for users who are vision impaired. The solution must provide the output in complete sentences.

Which API call should you perform?

  • A. readInStreamAsync
  • B. analyzeImagesByDomainInStreamAsync
  • C. tagImageInStreamAsync
  • D. describeImageInStreamAsync

Suggested Answer

D

Answer Description Click to expand


Community Answer Votes

Comments 8 comments Click to expand

Comment 1

ID: 839744 User: RAN_L Badges: Highly Voted Relative Date: 2 years, 12 months ago Absolute Date: Wed 15 Mar 2023 10:38 Selected Answer: D Upvotes: 22

The API call you should perform to provide an output in complete sentences for users who are vision impaired is describeImageInStreamAsync.

The describe feature of the Computer Vision API generates a human-readable sentence to describe the contents of an image. This is particularly useful for accessibility purposes, as it allows visually impaired users to understand what is in an image without needing to see it. The describe feature can also be customized to provide additional details or context, if desired.

Therefore, the correct answer is D. describeImageInStreamAsync.

Comment 2

ID: 1297829 User: Sujeeth Badges: Most Recent Relative Date: 1 year, 4 months ago Absolute Date: Tue 15 Oct 2024 00:01 Selected Answer: - Upvotes: 2

D. describeImageInStreamAsync.
The describeImageInStreamAsync method in the Computer Vision API is designed to generate descriptions of an image in complete sentences, which is ideal for providing information to vision-impaired users. It outputs human-readable descriptions that convey the content of the image in a way that can be converted to audio or other accessible formats.
A. readInStreamAsync is used for extracting text from images (OCR).
B. analyzeImagesByDomainInStreamAsync is for domain-specific image analysis (like celebrities or landmarks).
C. tagImageInStreamAsync provides tags for the image but not in complete sentences, which is less useful for accessibility purposes.

Comment 3

ID: 1235157 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:04 Selected Answer: D Upvotes: 1

I'd say the answer is D.

Comment 4

ID: 1217613 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:48 Selected Answer: D Upvotes: 1

D. describeImageInStreamAsync is right answer.

Comment 5

ID: 1194710 User: sivapolam90 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 13 Apr 2024 08:56 Selected Answer: D Upvotes: 1

Answer is D. describeImageInStreamAsync.

Comment 6

ID: 1147905 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Mon 12 Feb 2024 09:57 Selected Answer: D Upvotes: 1

The correct answer is D. describeImageInStreamAsync.
To configure an app that provides output in complete sentences for users who are vision impaired, using the Computer Vision API, you should use the describeImageInStreamAsync call. This API call analyzes an image and provides a description in natural, human-readable language.

Comment 7

ID: 1056567 User: trashbox Badges: - Relative Date: 2 years, 4 months ago Absolute Date: Sun 29 Oct 2023 05:09 Selected Answer: - Upvotes: 3

Appeared on Oct/29/2023.

Comment 8

ID: 984260 User: james2033 Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Fri 18 Aug 2023 09:46 Selected Answer: D Upvotes: 2

See sample source code

using (Stream imageStream = File.OpenRead(imagePath))
{
    // Generate a natural-language description of the image contents
    ImageDescription descriptions = await computerVision.DescribeImageInStreamAsync(imageStream);
    Console.WriteLine(imagePath);
    DisplayDescriptions(descriptions);
}

at https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/samples/ComputerVision/DescribeImage/Program.cs#L75C20-L75C20

64. AI-102 Topic 2 Question 35

Sequence
194
Discussion ID
134935
Source URL
https://www.examtopics.com/discussions/microsoft/view/134935-exam-ai-102-topic-2-question-35-discussion/
Posted By
audlindr
Posted At
Feb. 29, 2024, 7:22 p.m.

Question

You are building an app that will share user images.

You need to configure the app to perform the following actions when a user uploads an image:

• Categorize the image as either a photograph or a drawing.
• Generate a caption for the image.

The solution must minimize development effort.

Which two services should you include in the solution? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

  • A. object detection in Azure AI Computer Vision
  • B. content tags in Azure AI Computer Vision
  • C. image descriptions in Azure AI Computer Vision
  • D. image type detection in Azure AI Computer Vision
  • E. image classification in Azure AI Custom Vision

Suggested Answer

CD

Answer Description Click to expand


Community Answer Votes

Comments 10 comments Click to expand

Comment 1

ID: 1162900 User: audlindr Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Thu 29 Feb 2024 19:22 Selected Answer: CD Upvotes: 16

I think Categorize the image as either a photograph or a drawing should be Image type detection
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-detecting-image-types

Image Categorization doesn't identify image as photograph or drawing: See this: https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-categorizing-images

Captions are generated using image descriptions in V3.2. However in V4.0 it is image captions
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-describing-images

Comment 2

ID: 1297462 User: 4371883 Badges: Most Recent Relative Date: 1 year, 4 months ago Absolute Date: Mon 14 Oct 2024 12:35 Selected Answer: - Upvotes: 1

this is in Oct 2024 exam

Comment 3

ID: 1235166 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:11 Selected Answer: CD Upvotes: 1

CD is the answer.

Comment 4

ID: 1218251 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:29 Selected Answer: CD Upvotes: 1

image descriptions and image type detection.

Comment 5

ID: 1213818 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 19 May 2024 15:44 Selected Answer: CD Upvotes: 1

It must be C and D.

Comment 6

ID: 1202825 User: Jimmy1017 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Fri 26 Apr 2024 23:38 Selected Answer: - Upvotes: 1

A. Object detection in Azure AI Computer Vision
C. Image descriptions in Azure AI Computer Vision

Explanation:

Object Detection (option A): This service in Azure AI Computer Vision can be used to detect objects within an image. By analyzing the content of the image, it can identify whether the image contains elements typically found in photographs (e.g., people, landscapes) or drawings (e.g., sketches, illustrations). This information can help categorize the image accordingly.
Image Descriptions (option C): Azure AI Computer Vision can generate descriptions for images, providing textual summaries of the content. These descriptions can include details about the objects detected in the image, providing additional context that can aid in categorization and caption generation.

Comment 7

ID: 1189591 User: chandiochan Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 05 Apr 2024 03:00 Selected Answer: CD Upvotes: 1

Must be C & D

Comment 8

ID: 1184283 User: haverner Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Wed 27 Mar 2024 19:21 Selected Answer: CE Upvotes: 3

CE, Categorize instantly becomes Classification

Comment 9

ID: 1164333 User: Harry300 Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 20:04 Selected Answer: CD Upvotes: 1

CD imageType for photograph/drawing

Comment 10

ID: 1163636 User: Harry300 Badges: - Relative Date: 2 years ago Absolute Date: Fri 01 Mar 2024 18:13 Selected Answer: CD Upvotes: 1

Should be CD

65. AI-102 Topic 2 Question 31

Sequence
195
Discussion ID
112135
Source URL
https://www.examtopics.com/discussions/microsoft/view/112135-exam-ai-102-topic-2-question-31-discussion/
Posted By
973b658
Posted At
June 14, 2023, 8:54 a.m.

Question

HOTSPOT
-

You are building a model to detect objects in images.

The performance of the model based on training data is shown in the following exhibit.

image

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 7 comments Click to expand

Comment 1

ID: 935876 User: Pixelmate Badges: Highly Voted Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 00:26 Selected Answer: - Upvotes: 7

Asked in 28/06/2023 exam

Comment 2

ID: 1297459 User: 4371883 Badges: Most Recent Relative Date: 1 year, 4 months ago Absolute Date: Mon 14 Oct 2024 12:33 Selected Answer: - Upvotes: 2

got this in Oct 2024 exam

Comment 3

ID: 1285090 User: mrg998 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 10:16 Selected Answer: - Upvotes: 1

0 & 25

Comment 4

ID: 1225484 User: NagaoShingo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 06 Jun 2024 14:33 Selected Answer: - Upvotes: 1

1. 0
2. 25

Comment 5

ID: 1218259 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:34 Selected Answer: - Upvotes: 1

1. 0
2. 25

Comment 6

ID: 924864 User: Tin_Tin Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Fri 16 Jun 2023 07:08 Selected Answer: - Upvotes: 1

The answer is correct.
See https://learn.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/get-started-build-detector

Comment 7

ID: 922817 User: 973b658 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Wed 14 Jun 2023 08:54 Selected Answer: - Upvotes: 1

It is true.
#1:Precision = 100%
#2:recall = 25%
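Precision and recall follow directly from the true-positive, false-positive, and false-negative counts. The counts below are an illustrative assumption (not taken from the exhibit) chosen to be consistent with 100% precision and 25% recall:

```python
# Precision/recall arithmetic behind the exhibit. The detection counts
# are hypothetical, chosen to match precision = 100% and recall = 25%.
def precision(tp, fp):
    """Of the detections the model made, what fraction were correct?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the objects actually present, what fraction were detected?"""
    return tp / (tp + fn)

tp, fp, fn = 1, 0, 3   # hypothetical counts
print(precision(tp, fp))  # 1.0  -> 100%
print(recall(tp, fn))     # 0.25 -> 25%
```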

66. AI-102 Topic 3 Question 65

Sequence
200
Discussion ID
134971
Source URL
https://www.examtopics.com/discussions/microsoft/view/134971-exam-ai-102-topic-3-question-65-discussion/
Posted By
zoha_zohe
Posted At
March 1, 2024, 9:09 a.m.

Question

You have a file share that contains 5,000 images of scanned invoices.

You need to analyze the images. The solution must extract the following data:

• Invoice items
• Sales amounts
• Customer details

What should you use?

  • A. Custom Vision
  • B. Azure AI Computer Vision
  • C. Azure AI Immersive Reader
  • D. Azure AI Document Intelligence

Suggested Answer

D

Answer Description Click to expand


Community Answer Votes

Comments 10 comments Click to expand

Comment 1

ID: 1295510 User: 4371883 Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Thu 10 Oct 2024 11:30 Selected Answer: D Upvotes: 1

D, because invoice model type can "Extract customer and vendor details".

Comment 2

ID: 1265831 User: anto69 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Wed 14 Aug 2024 17:02 Selected Answer: D Upvotes: 1

I'm pretty sure it's D

Comment 3

ID: 1235182 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:19 Selected Answer: D Upvotes: 2

I say this answer is D.

Comment 4

ID: 1232321 User: etellez Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Tue 18 Jun 2024 10:23 Selected Answer: - Upvotes: 1

Copilot says

B. Azure AI Computer Vision

Explanation:

Azure AI Computer Vision's Read API is designed to extract printed and handwritten text from images and documents. It uses Optical Character Recognition (OCR) to extract text and structure from your images, which can be used to extract invoice items, sales amounts, and customer details from the scanned invoices.

Comment 5

ID: 1229210 User: reigenchimpo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 12 Jun 2024 16:10 Selected Answer: D Upvotes: 1

You see,

• Invoice items
• Sales amounts
• Customer details

so, we use "Azure AI Document Intelligence ".
D is answer.

Comment 6

ID: 1215764 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 22 May 2024 15:34 Selected Answer: D Upvotes: 1

https://learn.microsoft.com/en-us/azure/ai-services/?view=doc-intel-4.0.0
To deal with this type of question, it is good to understand Microsoft's AI services in a nutshell. It is tough to do so by rote memorization.

Comment 7

ID: 1177778 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 19 Mar 2024 23:00 Selected Answer: - Upvotes: 1

To efficiently extract data from scanned invoices, including invoice items, sales amounts, and customer details, you should use Azure AI Document Intelligence. Therefore, the correct answer is D.

Comment 8

ID: 1170081 User: GHill1982 Badges: - Relative Date: 2 years ago Absolute Date: Sun 10 Mar 2024 07:37 Selected Answer: D Upvotes: 3

D. Azure AI Document Intelligence
This service is specifically designed for tasks like automated invoice processing and can extract key information from documents.

Comment 9

ID: 1168328 User: Razvan_C Badges: - Relative Date: 2 years ago Absolute Date: Thu 07 Mar 2024 21:06 Selected Answer: D Upvotes: 1

Answer seems to be correct

Comment 10

ID: 1163321 User: zoha_zohe Badges: - Relative Date: 2 years ago Absolute Date: Fri 01 Mar 2024 09:09 Selected Answer: - Upvotes: 2

Answer is correct
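
Once Azure AI Document Intelligence (answer D) analyzes an invoice, the requested data comes back as named fields. A minimal sketch of the extraction step, assuming a simplified response shape; the real prebuilt-invoice model returns richer field objects, though names like CustomerName, InvoiceTotal, and Items do exist in its schema:

```python
# Sketch: pull the question's three targets out of a prebuilt-invoice result.
# The dict layout below is a simplified stand-in for the real response,
# used here only for illustration.

def summarize_invoice(fields: dict) -> dict:
    """Collect invoice items, the sales amount, and customer details."""
    items = [line["Description"] for line in fields.get("Items", [])]
    return {
        "items": items,
        "total": fields.get("InvoiceTotal"),
        "customer": fields.get("CustomerName"),
    }

sample = {
    "CustomerName": "Contoso Ltd.",
    "InvoiceTotal": 110.0,
    "Items": [{"Description": "Widget"}, {"Description": "Gadget"}],
}
print(summarize_invoice(sample))
```

The point of the prebuilt model is that these fields arrive already labeled, so no custom training or labeling effort is needed.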

67. AI-102 Topic 9 Question 2

Sequence
203
Discussion ID
62731
Source URL
https://www.examtopics.com/discussions/microsoft/view/62731-exam-ai-102-topic-9-question-2-discussion/
Posted By
Diem
Posted At
Sept. 26, 2021, 8:42 a.m.

Question

HOTSPOT -
You need to develop code to upload images for the product creation project. The solution must meet the accessibility requirements.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description Click to expand

Reference:
https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/documentation-samples/quickstarts/ComputerVision/Program.cs

Comments 5 comments Click to expand

Comment 1

ID: 1287416 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sat 21 Sep 2024 19:28 Selected Answer: - Upvotes: 1

Microsoft employees have bad naming conventions, so they try to trick you with that. String image? Nope, I will not write it, but it should have been String imageUrl. Who wants to remember these API calls in this age of copilots (never mind Google and the documentation)?

Comment 2

ID: 1235765 User: HVardhini Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 23 Jun 2024 11:43 Selected Answer: - Upvotes: 1

The given answer is correct because this is the requirement - All images must have relevant alt text.

Comment 3

ID: 1145897 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 10 Feb 2024 03:14 Selected Answer: - Upvotes: 1

given answer is correct

Comment 4

ID: 599931 User: JTWang Badges: - Relative Date: 3 years, 10 months ago Absolute Date: Wed 11 May 2022 08:15 Selected Answer: - Upvotes: 2

AnalyzeImageSample
https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/samples/ComputerVision/AnalyzeImage/Program.cs

Comment 5

ID: 451695 User: Diem Badges: - Relative Date: 4 years, 5 months ago Absolute Date: Sun 26 Sep 2021 08:42 Selected Answer: - Upvotes: 4

Correct!

68. AI-102 Topic 2 Question 30

Sequence
206
Discussion ID
108602
Source URL
https://www.examtopics.com/discussions/microsoft/view/108602-exam-ai-102-topic-2-question-30-discussion/
Posted By
Rob77
Posted At
May 6, 2023, 5:55 a.m.

Question

DRAG DROP
-

You have a factory that produces cardboard packaging for food products. The factory has intermittent internet connectivity.

The packages are required to include four samples of each product.

You need to build a Custom Vision model that will identify defects in packaging and provide the location of the defects to an operator. The model must ensure that each package contains the four products.

Which project type and domain should you use? To answer, drag the appropriate options to the correct targets. Each option may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 11 comments Click to expand

Comment 1

ID: 933417 User: rveney Badges: Highly Voted Relative Date: 2 years, 8 months ago Absolute Date: Sun 25 Jun 2023 11:26 Selected Answer: - Upvotes: 11

This was on my exam

Comment 2

ID: 935875 User: Pixelmate Badges: Highly Voted Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 00:25 Selected Answer: - Upvotes: 6

Asked in 28/06/2023 exam

Comment 3

ID: 1285089 User: mrg998 Badges: Most Recent Relative Date: 1 year, 5 months ago Absolute Date: Tue 17 Sep 2024 10:15 Selected Answer: - Upvotes: 2

project - object detection because it needs to detect errors.
Domain - compact as it needs to be run on a container offline

Comment 4

ID: 1248478 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 15 Jul 2024 19:28 Selected Answer: - Upvotes: 2

1. Object detection
2. General (compact)

Comment 5

ID: 1234512 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 16:55 Selected Answer: - Upvotes: 1

1. Object detection
2. General(compact)

Comment 6

ID: 1220325 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:13 Selected Answer: - Upvotes: 1

This answer is as follows.
1. Object detection
2. General (compact)

Comment 7

ID: 1218269 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 13:42 Selected Answer: - Upvotes: 1

1. Object detection
2. General(compact)

Comment 8

ID: 1214366 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 15:51 Selected Answer: - Upvotes: 1

Project type is Object detection. Domain is General(compact).

Comment 9

ID: 1152268 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 23:46 Selected Answer: - Upvotes: 2

locally ==> means General (compact)
detect ==> Object detection

Comment 10

ID: 1132970 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 27 Jan 2024 02:07 Selected Answer: - Upvotes: 4

The factory has intermittent internet connectivity: this means an edge deployment of the model without internet connectivity is needed, and an edge model using the General (compact) domain suits the demands.

Comment 11

ID: 890456 User: Rob77 Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Sat 06 May 2023 05:55 Selected Answer: - Upvotes: 3

Correct - https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/select-domain#compact-domains
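
The chosen combination (Object detection + General (compact)) can be checked against the business rule in post-processing: the detector returns tagged bounding boxes, and the app verifies four products per package and surfaces defect locations to the operator. A minimal sketch, assuming a simplified prediction shape loosely modeled on the Custom Vision prediction output (the exact keys here are assumptions for illustration):

```python
# Sketch of the operator-facing logic the question implies: report defect
# locations and confirm that exactly four products are present.

def inspect_package(predictions, threshold=0.5):
    """Filter low-confidence hits, then split defects from products."""
    hits = [p for p in predictions if p["probability"] >= threshold]
    defects = [p["boundingBox"] for p in hits if p["tagName"] == "defect"]
    products = [p for p in hits if p["tagName"] == "product"]
    return {"defect_locations": defects, "complete": len(products) == 4}

predictions = [
    {"tagName": "product", "probability": 0.92,
     "boundingBox": {"left": 0.05, "top": 0.05, "width": 0.2, "height": 0.2}},
    {"tagName": "product", "probability": 0.90,
     "boundingBox": {"left": 0.55, "top": 0.05, "width": 0.2, "height": 0.2}},
    {"tagName": "product", "probability": 0.88,
     "boundingBox": {"left": 0.05, "top": 0.55, "width": 0.2, "height": 0.2}},
    {"tagName": "product", "probability": 0.87,
     "boundingBox": {"left": 0.55, "top": 0.55, "width": 0.2, "height": 0.2}},
    {"tagName": "defect", "probability": 0.80,
     "boundingBox": {"left": 0.30, "top": 0.30, "width": 0.1, "height": 0.1}},
]
result = inspect_package(predictions)
print(result["complete"], len(result["defect_locations"]))
```

Classification alone could not supply the boundingBox values the operator needs, which is why Object detection is the required project type.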

69. AI-102 Topic 4 Question 21

Sequence
211
Discussion ID
136704
Source URL
https://www.examtopics.com/discussions/microsoft/view/136704-exam-ai-102-topic-4-question-21-discussion/
Posted By
rober13
Posted At
March 20, 2024, 7:52 a.m.

Question

You are building an app named App1 that will use Azure AI Document Intelligence to extract the following data from scanned documents:

• Shipping address
• Billing address
• Customer ID
• Amount due
• Due date
• Total tax
• Subtotal

You need to identify which model to use for App1. The solution must minimize development effort.

Which model should you use?

  • A. custom extraction model
  • B. contract
  • C. invoice
  • D. general document

Suggested Answer

C

Answer Description Click to expand


Comments 6 comments Click to expand

Comment 1

ID: 1284206 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:54 Selected Answer: - Upvotes: 1

This question has nothing to do with AI; it's asking me what that list of data elements belongs to. What does it prove if I know they are from an invoice? In real life I will know it is an invoice and select an invoice model.

Comment 2

ID: 1243723 User: anto69 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 07 Jul 2024 08:24 Selected Answer: - Upvotes: 1

I think is C

Comment 3

ID: 1229830 User: reigenchimpo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 13 Jun 2024 15:22 Selected Answer: C Upvotes: 1

C is answer.

Comment 4

ID: 1218685 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 26 May 2024 04:39 Selected Answer: C Upvotes: 2

C "invoice" is right answer.

Comment 5

ID: 1180211 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 18:15 Selected Answer: - Upvotes: 1

To extract the specified data from scanned documents with minimal development effort, you should use the invoice model. The invoice model is specifically designed to handle structured information commonly found in invoices, including shipping and billing addresses, customer IDs, amounts due, due dates, total taxes, and subtotal. Therefore, the correct choice for App1 is C. invoice

Comment 6

ID: 1177985 User: rober13 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Wed 20 Mar 2024 07:52 Selected Answer: C Upvotes: 2

Reference: https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/concept-invoice?view=doc-intel-4.0.0#field-extraction

70. AI-102 Topic 12 Question 1

Sequence
212
Discussion ID
76784
Source URL
https://www.examtopics.com/discussions/microsoft/view/76784-exam-ai-102-topic-12-question-1-discussion/
Posted By
ANIKI51419
Posted At
June 14, 2022, 6:24 p.m.

Question

You need to develop an extract solution for the receipt images. The solution must meet the document processing requirements and the technical requirements.
You upload the receipt images to the Form Recognizer API for analysis, and the API returns the following JSON.
image
Which expression should you use to trigger a manual review of the extracted information by a member of the Consultant-Bookkeeper group?

  • A. documentResults.docType == "prebuilt:receipt"
  • B. documentResults.fields.*.confidence < 0.7
  • C. documentResults.fields.ReceiptType.confidence > 0.7
  • D. documentResults.fields.MerchantName.confidence < 0.7

Suggested Answer

B

Answer Description Click to expand


Comments 11 comments Click to expand

Comment 1

ID: 632978 User: AusAv Badges: Highly Voted Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 13:12 Selected Answer: B Upvotes: 51

Answer is B as I just did the exam and got 100% for the section :)

Comment 2

ID: 1284579 User: mustafaalhnuty Badges: Most Recent Relative Date: 1 year, 5 months ago Absolute Date: Mon 16 Sep 2024 11:52 Selected Answer: - Upvotes: 1

B 100%
* mean any

Comment 3

ID: 1267914 User: JuneRain Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 18 Aug 2024 04:56 Selected Answer: - Upvotes: 2

This question was in the test I took on August 2024

Comment 4

ID: 1249713 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Wed 17 Jul 2024 17:03 Selected Answer: B Upvotes: 1

For me it's for sure B

Comment 5

ID: 1230485 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 14:23 Selected Answer: B Upvotes: 1

B is the correct answer.

Comment 6

ID: 1214421 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:33 Selected Answer: B Upvotes: 3

Is this question still available on May 21, 2024?

Comment 7

ID: 1145908 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 10 Feb 2024 04:17 Selected Answer: B Upvotes: 2

The question: "AI solution responses must have a confidence score that is equal to or greater than 70 percent. When the response confidence score of an AI response is lower than 70 percent, the response must be improved by human input."
So the review trigger has to be fields.*.confidence < 0.7.

Comment 8

ID: 889710 User: crunkiNhere Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Fri 05 May 2023 02:17 Selected Answer: B Upvotes: 3

I can't imagine a world where the answer could be anything but B

Comment 9

ID: 634045 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Wed 20 Jul 2022 14:50 Selected Answer: B Upvotes: 2

B is correct answer.

Comment 10

ID: 633035 User: AiEngineerS Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 15:16 Selected Answer: B Upvotes: 2

I think is B.. is that?

Comment 11

ID: 616304 User: ANIKI51419 Badges: - Relative Date: 3 years, 9 months ago Absolute Date: Tue 14 Jun 2022 18:24 Selected Answer: - Upvotes: 3

should be B
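
The wildcard expression in answer B reads as "any field's confidence is below 0.7". A direct Python equivalent over a simplified documentResults payload (the dict layout below is an assumption for illustration, mimicking the Form Recognizer JSON in the question):

```python
# fields.*.confidence < 0.7 == "trigger review if ANY field is below 0.7".

def needs_manual_review(document_results: dict, threshold: float = 0.7) -> bool:
    fields = document_results.get("fields", {})
    return any(f.get("confidence", 0.0) < threshold for f in fields.values())

doc = {
    "docType": "prebuilt:receipt",
    "fields": {
        "ReceiptType": {"confidence": 0.99},
        "MerchantName": {"confidence": 0.65},  # below 0.7 -> route to a human
    },
}
print(needs_manual_review(doc))  # True: MerchantName is under the threshold
```

Options C and D fail because they each inspect only a single field, so a low-confidence value elsewhere would slip through unreviewed.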

71. AI-102 Topic 4 Question 18

Sequence
213
Discussion ID
134940
Source URL
https://www.examtopics.com/discussions/microsoft/view/134940-exam-ai-102-topic-4-question-18-discussion/
Posted By
audlindr
Posted At
Feb. 29, 2024, 8:44 p.m.

Question

HOTSPOT -

You have an Azure subscription that contains an Azure AI Document Intelligence resource named DI1.

You build an app named App1 that analyzes PDF files for handwritten content by using DI1.

You need to ensure that App1 will recognize the handwritten content.

How should you complete the code? To answer, select the appropriate options in the answer area.

image

Suggested Answer

image
Answer Description Click to expand


Comments 18 comments Click to expand

Comment 1

ID: 1184575 User: Mehe323 Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Thu 28 Mar 2024 06:33 Selected Answer: - Upvotes: 18

The first answer should be read:
Read OCR - Extract print and handwritten text including words, locations, and detected languages.

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/concept-model-overview?view=doc-intel-4.0.0

Comment 2

ID: 1181520 User: AlviraTony Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Sun 24 Mar 2024 13:32 Selected Answer: - Upvotes: 10

Prebuilt-read is correct, which can classify the text extracted as handwritten or printed.

Comment 3

ID: 1284197 User: famco Badges: Most Recent Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:34 Selected Answer: - Upvotes: 2

read model supports handwritten (I'm too tired to check if prebuilt-document also supports handwritten. I do not know why not).
https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/concept-read?view=doc-intel-4.0.0&tabs=sample-code#handwritten-style-for-text-lines

Comment 3.1

ID: 1284198 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:36 Selected Answer: - Upvotes: 2

of course, prebuilt-document also supports it as seen in this code example:
https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet#use-the-prebuilt-general-document-model

Microsoft has no quality or taste.

Comment 3.1.1

ID: 1284199 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:37 Selected Answer: - Upvotes: 1

but considering this code extract is the one that is taken and there are no samples with prebuilt-read, and the Microsoft guy did not see the read one, I will go for prebuilt-document.

Comment 3.1.1.1

ID: 1284201 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:40 Selected Answer: - Upvotes: 1

Nope. That path is also not there. There is an example for the read one as well with handwritten recognized:
https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet#use-the-prebuilt-read-model

Comment 3.1.1.1.1

ID: 1284203 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:45 Selected Answer: - Upvotes: 1

And exactly the same piece of code and in the same page. Now how can I guess which one the great employee from Microsoft saw when creating this wonderful question.

Comment 3.1.1.1.1.1

ID: 1284204 User: famco Badges: - Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:47 Selected Answer: - Upvotes: 1

And why not 0.1 as the answer to the next question. I want all more than 10 percent. Who said I cannot do that?? Such substandard questions

Comment 4

ID: 1265032 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 08:58 Selected Answer: - Upvotes: 1

ChatGPT: prebuilt-read

Comment 5

ID: 1247914 User: krzkrzkra Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 14 Jul 2024 20:08 Selected Answer: - Upvotes: 3

1. "prebuilt-document"
2. 0.75

Comment 6

ID: 1235787 User: rookiee1111 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 23 Jun 2024 12:48 Selected Answer: - Upvotes: 3

prebuilt-read that is the correct one
https://learn.microsoft.com/en-gb/training/modules/use-prebuilt-form-recognizer-models/3-use-general-document-read-layout-models

Comment 7

ID: 1218690 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 26 May 2024 04:54 Selected Answer: - Upvotes: 2

1. "prebuilt-document"
2. 0.75

Comment 8

ID: 1191613 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 08 Apr 2024 15:55 Selected Answer: - Upvotes: 3

The given answers by exam topics is CORRECT

Comment 9

ID: 1180135 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 16:27 Selected Answer: - Upvotes: 1

It appears that the code snippet you’ve provided is in C#. Let’s break it down

You’re iterating through the DocumentStyle objects within the result.Styles

You’re invoking an AnalyzeDocumentOperation from an AnalyzeDocumentFromUriAsync method call. The operation is awaited until completion

For each style, you’re checking if it’s handwritten (based on the IsHandwritten property) and if its confidence level is greater than 0.1

Comment 10

ID: 1162963 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Thu 29 Feb 2024 20:44 Selected Answer: - Upvotes: 3

Reference: https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet#use-the-prebuilt-general-document-model

Comment 10.1

ID: 1164355 User: Harry300 Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 21:06 Selected Answer: - Upvotes: 7

why document and not the read model? the code appears in both examples on that website

Comment 10.1.1

ID: 1273130 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 27 Aug 2024 06:18 Selected Answer: - Upvotes: 1

document inherits from read, but if you only need OCR, read is sufficient.

Comment 10.2

ID: 1184576 User: Mehe323 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 28 Mar 2024 06:33 Selected Answer: - Upvotes: 3

It is read:

Read OCR - Extract print and handwritten text including words, locations, and detected languages.

https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/concept-read?view=doc-intel-4.0.0
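
The snippet under discussion iterates result.Styles and checks IsHandwritten against a confidence bound. The same filter expressed in Python over a simplified styles list; the dict keys here are assumptions for illustration, though the real SDK does expose style objects with is_handwritten and confidence members:

```python
# Keep only styles that the service flagged as handwritten with enough
# confidence (0.75 matches the threshold given in the suggested answer).

def handwritten_styles(styles, min_confidence=0.75):
    return [s for s in styles
            if s["is_handwritten"] and s["confidence"] >= min_confidence]

styles = [
    {"is_handwritten": True, "confidence": 0.9},   # kept
    {"is_handwritten": False, "confidence": 0.99},  # printed text, skipped
    {"is_handwritten": True, "confidence": 0.4},   # too uncertain, skipped
]
print(len(handwritten_styles(styles)))  # 1
```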

72. AI-102 Topic 4 Question 20

Sequence
214
Discussion ID
135063
Source URL
https://www.examtopics.com/discussions/microsoft/view/135063-exam-ai-102-topic-4-question-20-discussion/
Posted By
audlindr
Posted At
March 2, 2024, 7:02 p.m.

Question

HOTSPOT
-

You have an Azure subscription.

You need to deploy an Azure AI Document Intelligence resource.

How should you complete the Azure Resource Manager (ARM) template? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 5 comments Click to expand

Comment 1

ID: 1181321 User: pwang009 Badges: Highly Voted Relative Date: 1 year, 11 months ago Absolute Date: Sun 24 Mar 2024 05:36 Selected Answer: - Upvotes: 12

Answer is correct. Validated.
1. Create a FormRecognizer resource
2. On the resource screen, click Export Template under automation
3. Check the template created

Comment 2

ID: 1284205 User: famco Badges: Highly Voted Relative Date: 1 year, 5 months ago Absolute Date: Sun 15 Sep 2024 17:52 Selected Answer: - Upvotes: 7

In the era of AI, and in an AI exam, they expect people to remember this useless information

Comment 3

ID: 1265034 User: anto69 Badges: Most Recent Relative Date: 1 year, 7 months ago Absolute Date: Tue 13 Aug 2024 09:01 Selected Answer: - Upvotes: 1

I checked on portal. Given answer is correct

Comment 4

ID: 1180147 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 16:56 Selected Answer: - Upvotes: 2

"kind": "FormRecognizer"

Comment 5

ID: 1180144 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 16:54 Selected Answer: - Upvotes: 2

The template specifies parameters like cognitiveServiceName, location, and sku.
It creates a Cognitive Services account of type CognitiveServices with the specified properties.
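
For reference, a trimmed resource block of the shape the validated answer describes: kind "FormRecognizer" on the Microsoft.CognitiveServices/accounts resource type. The apiVersion and sku values here are illustrative placeholders, not part of the question's answer:

```json
{
  "type": "Microsoft.CognitiveServices/accounts",
  "apiVersion": "2023-05-01",
  "name": "[parameters('cognitiveServiceName')]",
  "location": "[parameters('location')]",
  "sku": { "name": "S0" },
  "kind": "FormRecognizer",
  "properties": {}
}
```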

73. AI-102 Topic 4 Question 23

Sequence
225
Discussion ID
143494
Source URL
https://www.examtopics.com/discussions/microsoft/view/143494-exam-ai-102-topic-4-question-23-discussion/
Posted By
anto69
Posted At
July 7, 2024, 8:28 a.m.

Question

You are building an app that will process scanned expense claims and extract and label the following data:

• Merchant information
• Time of transaction
• Date of transaction
• Taxes paid
• Total cost

You need to recommend an Azure AI Document Intelligence model for the app. The solution must minimize development effort.

What should you use?

  • A. the prebuilt Read model
  • B. a custom template model
  • C. a custom neural model
  • D. the prebuilt receipt model

Suggested Answer

D

Answer Description Click to expand


Comments 4 comments Click to expand

Comment 1

ID: 1282681 User: mrg998 Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Thu 12 Sep 2024 16:27 Selected Answer: D Upvotes: 1

d for sure

Comment 2

ID: 1273541 User: JakeCallham Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Tue 27 Aug 2024 17:39 Selected Answer: D Upvotes: 1

Obviously D

Comment 3

ID: 1262010 User: anto69 Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Wed 07 Aug 2024 10:56 Selected Answer: D Upvotes: 2

D looks correct, Copilot confirms it

Comment 4

ID: 1243725 User: anto69 Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sun 07 Jul 2024 08:28 Selected Answer: D Upvotes: 1

Looks like D is the correct answer

74. AI-102 Topic 2 Question 14

Sequence
227
Discussion ID
84342
Source URL
https://www.examtopics.com/discussions/microsoft/view/84342-exam-ai-102-topic-2-question-14-discussion/
Posted By
Anulf
Posted At
Oct. 4, 2022, 2:09 p.m.

Question

HOTSPOT -
You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands.
You have the following code segment.
image
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
image

Suggested Answer

image
Answer Description Click to expand

Box 1: Yes -

Box 2: Yes -
Coordinates of a rectangle in the API refer to the top left corner.

Box 3: No -
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-brand-detection

Comments 22 comments Click to expand

Comment 1

ID: 994985 User: M25 Badges: Highly Voted Relative Date: 2 years, 6 months ago Absolute Date: Thu 31 Aug 2023 11:58 Selected Answer: - Upvotes: 16

Y, Y, N
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/shelf-analyze#bounding-box-api-model
x Left-coordinate of the top left point of the area, in pixels.
y Top-coordinate of the top left point of the area, in pixels.
w Width measured from the top-left point of the area, in pixels.
h Height measured from the top-left point of the area, in pixels.

Comment 2

ID: 717311 User: halfway Badges: Highly Voted Relative Date: 3 years, 3 months ago Absolute Date: Sun 13 Nov 2022 14:02 Selected Answer: - Upvotes: 15

Maybe I take it too literally, but I think the third one is "NO": the response returns Width and Height, which can be used to calculate the coordinates of bottom right corner, but it does not include them directly.

Comment 3

ID: 1281398 User: mrg998 Badges: Most Recent Relative Date: 1 year, 6 months ago Absolute Date: Tue 10 Sep 2024 08:47 Selected Answer: - Upvotes: 1

yes yes no

Comment 4

ID: 1252268 User: EngT Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Sun 21 Jul 2024 08:35 Selected Answer: - Upvotes: 1

I can't see anything related to corner position in the API.
Then it's Y, Y, Y.

Comment 4.1

ID: 1252272 User: EngT Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Sun 21 Jul 2024 08:40 Selected Answer: - Upvotes: 2

Nope, based on the documentation it's top left - so it's Y, Y, N.

Comment 5

ID: 1220336 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:21 Selected Answer: - Upvotes: 3

The correct sequence is Yes Yes No.

Comment 6

ID: 1218330 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 14:55 Selected Answer: - Upvotes: 2

This is the same question as Topic2 #29.

Comment 7

ID: 1214384 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:07 Selected Answer: - Upvotes: 3

Yes Yes No

Comment 8

ID: 1214382 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:06 Selected Answer: - Upvotes: 2

Yes Yes No

Comment 9

ID: 1204381 User: michaelmorar Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Tue 30 Apr 2024 09:05 Selected Answer: - Upvotes: 2

The brand_confidence variable is not declared in the snippet. Perhaps they meant brand.confidence? Same with rectangle_x. As it stands, the answers should be N, N, N.

Comment 10

ID: 1182436 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 12:53 Selected Answer: - Upvotes: 2

Final Answer:
Yes
Yes
No

Comment 11

ID: 839770 User: RAN_L Badges: - Relative Date: 2 years, 12 months ago Absolute Date: Wed 15 Mar 2023 11:22 Selected Answer: - Upvotes: 2

The code will return the name of each detected brand with a confidence equal to or higher than 75 percent.
Yes, the code will return the name of each detected brand with a confidence equal to or higher than 75 percent.

2. The code will return coordinates for the top-left corner of the rectangle that contains the brand logo of the displayed brands.
Yes, the code will return the coordinates for the top-left corner of the rectangle that contains the brand logo of the displayed brands.

3. The code will return coordinates for the bottom-right corner of the rectangle that contains the brand logo of the displayed brands.
No, the code will not return coordinates for the bottom-right corner of the rectangle that contains the brand logo of the displayed brands. The code is printing the width and height of the rectangle instead.

Comment 12

ID: 770663 User: VinnieG Badges: - Relative Date: 3 years, 2 months ago Absolute Date: Mon 09 Jan 2023 17:48 Selected Answer: - Upvotes: 2

It could be a trap: so Yes, No (it is rectangle.x and not _x), No (it should be x + w, as the service returns the width and the top-left corner):

Console.WriteLine("Brands:");
foreach (var brand in results.Brands)
{
    Console.WriteLine($"Logo of {brand.Name} with confidence {brand.Confidence} at location {brand.Rectangle.X}, " +
        $"{brand.Rectangle.X + brand.Rectangle.W}, {brand.Rectangle.Y}, {brand.Rectangle.Y + brand.Rectangle.H}");
}
Console.WriteLine();

https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/call-analyze-image?tabs=csharp

Comment 12.1

ID: 1163309 User: rober13 Badges: - Relative Date: 2 years ago Absolute Date: Fri 01 Mar 2024 08:55 Selected Answer: - Upvotes: 1

it is true, it is a trick.
Yes,No,No

Comment 12.2

ID: 1214153 User: TJ001 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 08:47 Selected Answer: - Upvotes: 1

hope it is a typo. explanation is spot on

Comment 13

ID: 744131 User: rafael0 Badges: - Relative Date: 3 years, 2 months ago Absolute Date: Tue 13 Dec 2022 15:31 Selected Answer: - Upvotes: 3

its' yes
yes
no
The coordinates are always regarding the top left point of the rectangle

Comment 14

ID: 693925 User: oliverio Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Thu 13 Oct 2022 15:21 Selected Answer: - Upvotes: 1

Y
Y
Y
the code will return the coordinates for any position

Comment 15

ID: 686136 User: Anulf Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Tue 04 Oct 2022 14:09 Selected Answer: - Upvotes: 1

yes
yes
yes

Comment 15.1

ID: 686520 User: Davard Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Wed 05 Oct 2022 04:08 Selected Answer: - Upvotes: 2

What makes the third one "yes"?

Comment 15.1.1

ID: 689515 User: Anulf Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Sat 08 Oct 2022 18:44 Selected Answer: - Upvotes: 2

According to the Microsoft documentation, I thought so. What makes you think it is No?

Comment 15.1.1.1

ID: 705570 User: GigaCaster Badges: - Relative Date: 3 years, 4 months ago Absolute Date: Thu 27 Oct 2022 15:07 Selected Answer: - Upvotes: 2

if you look at the code it is _x and not .x

Comment 15.1.1.1.1

ID: 805925 User: AzureJobsTillRetire Badges: - Relative Date: 3 years ago Absolute Date: Sun 12 Feb 2023 04:24 Selected Answer: - Upvotes: 2

I think that is a typo
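
Why the third statement is "No": the brands response carries the top-left corner (x, y) plus width and height (w, h), so the bottom-right corner is never in the payload and has to be derived. A minimal sketch, assuming a simplified JSON shape for each brand's rectangle:

```python
# The API reports x/y (top-left) and w/h; bottom-right = (x + w, y + h).

def bottom_right(rect: dict) -> tuple:
    return (rect["x"] + rect["w"], rect["y"] + rect["h"])

def confident_brands(brands, min_confidence=0.75):
    """Names and derived bottom-right corners for brands at >= 75% confidence."""
    return [(b["name"], bottom_right(b["rectangle"]))
            for b in brands if b["confidence"] >= min_confidence]

brands = [
    {"name": "Contoso", "confidence": 0.81,
     "rectangle": {"x": 10, "y": 20, "w": 100, "h": 50}},
    {"name": "Fabrikam", "confidence": 0.52,  # below 0.75, filtered out
     "rectangle": {"x": 0, "y": 0, "w": 5, "h": 5}},
]
print(confident_brands(brands))  # [('Contoso', (110, 70))]
```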

75. AI-102 Topic 2 Question 13

Sequence
236
Discussion ID
54793
Source URL
https://www.examtopics.com/discussions/microsoft/view/54793-exam-ai-102-topic-2-question-13-discussion/
Posted By
motu
Posted At
June 7, 2021, 1:03 p.m.

Question

DRAG DROP -
You are developing a photo application that will find photos of a person based on a sample image by using the Face API.
You need to create a POST request to find the photos.
How should you complete the request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all.
You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand


Comments 20 comments Click to expand

Comment 1

ID: 376711 User: motu Badges: Highly Voted Relative Date: 4 years, 9 months ago Absolute Date: Mon 07 Jun 2021 13:03 Selected Answer: - Upvotes: 80

Box 1 is "findsimilars", others do not match the given request body and make no sense anyway. https://docs.microsoft.com/en-us/rest/api/faceapi/face/find-similar

Comment 1.1

ID: 381503 User: leo822 Badges: - Relative Date: 4 years, 9 months ago Absolute Date: Mon 14 Jun 2021 05:41 Selected Answer: - Upvotes: 2

cool. correct answer!

Comment 1.2

ID: 397079 User: idrisfl Badges: - Relative Date: 4 years, 8 months ago Absolute Date: Fri 02 Jul 2021 19:34 Selected Answer: - Upvotes: 4

definitely find-similar, as it is the only one whose body parameters correspond

Comment 2

ID: 633004 User: Eltooth Badges: Highly Voted Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 14:06 Selected Answer: - Upvotes: 16

findsimilars and matchPerson

https://docs.microsoft.com/en-us/rest/api/faceapi/face/find-similar?tabs=HTTP#find-similar-results-example

"matchPerson" is the default mode that it tries to find faces of the same person as possible by using internal same-person thresholds. It is useful to find a known person's other photos. Note that an empty list will be returned if no faces pass the internal thresholds.

"matchFace" mode ignores same-person thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used in the cases like searching celebrity-looking faces.
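The request shape the discussion converges on can be sketched as follows. This is a minimal illustration of the Find Similar call; the endpoint region, face ID, and face list ID below are placeholder values, not real resources:

```python
import json

# Sketch of the Face API "Find Similar" call discussed above.
# Field names follow the Find Similar REST reference; the IDs are
# placeholders for a face detected in the sample image and a stored
# face list of candidate photos.
def build_find_similar_request(endpoint: str, face_id: str, face_list_id: str) -> tuple[str, str]:
    url = f"{endpoint}/face/v1.0/findsimilars"
    body = {
        "faceId": face_id,                  # face from the sample image
        "faceListId": face_list_id,         # candidate faces to search
        "maxNumOfCandidatesReturned": 10,
        "mode": "matchPerson",              # same-person thresholds (vs. "matchFace")
    }
    return url, json.dumps(body)

url, body = build_find_similar_request(
    "https://westus.api.cognitive.microsoft.com",
    "c5c24a82-6845-4031-9d5d-978df9175426",
    "sample_list",
)
```

With `mode` set to `matchPerson`, an empty list comes back when no face clears the same-person threshold, which is exactly the behavior the question's photo-search scenario wants.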

Comment 3

ID: 1276618 User: famco Badges: Most Recent Relative Date: 1 year, 6 months ago Absolute Date: Mon 02 Sep 2024 12:57 Selected Answer: - Upvotes: 3

find similar and matchperson
matchperson because it is identifying a specific person, not finding similar faces

Comment 4

ID: 1248472 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 15 Jul 2024 19:24 Selected Answer: - Upvotes: 1

1. findsimilars
2. matchPerson

Comment 5

ID: 1234509 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 16:55 Selected Answer: - Upvotes: 1

1. findsimilars
2. matchPerson

Comment 6

ID: 1220256 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:54 Selected Answer: - Upvotes: 1

1. findsimilars
2. matchPerson

Comment 7

ID: 1218348 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 15:31 Selected Answer: - Upvotes: 1

1. findsimilars
2. matchPerson

Comment 8

ID: 1182434 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 12:51 Selected Answer: - Upvotes: 1

Final Answer:
findsimilars and matchPerson

Comment 9

ID: 560580 User: reachmymind Badges: - Relative Date: 4 years ago Absolute Date: Fri 04 Mar 2022 08:09 Selected Answer: - Upvotes: 5

Box 1: findsimilars
Box 2: matchPerson

https://dev.cognitive.azure.cn/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237

Comment 10

ID: 544118 User: bitcoin89 Badges: - Relative Date: 4 years, 1 month ago Absolute Date: Wed 09 Feb 2022 23:02 Selected Answer: - Upvotes: 2

First box: identify; second box: nothing.
POST {Endpoint}/face/v1.0/identify
Ocp-Apim-Subscription-Key: {API key}
{
  "largePersonGroupId": "sample_group",
  "faceIds": [
    "c5c24a82-6845-4031-9d5d-978df9175426",
    "65d083d4-9447-47d1-af30-b626144bf0fb"
  ],
  "maxNumOfCandidatesReturned": 1,
  "confidenceThreshold": 0.5
}

Comment 11

ID: 517052 User: sumanshu Badges: - Relative Date: 4 years, 2 months ago Absolute Date: Wed 05 Jan 2022 00:32 Selected Answer: - Upvotes: 1

Box 1 - FindSimilar
Box 2 - matchPerson (We have to find based on a sample photo)

Comment 12

ID: 473421 User: mikegsm Badges: - Relative Date: 4 years, 4 months ago Absolute Date: Sat 06 Nov 2021 12:19 Selected Answer: - Upvotes: 1

Seems FIND SIMILAR AND MATCHPERSON

Comment 13

ID: 464156 User: DeBoer Badges: - Relative Date: 4 years, 4 months ago Absolute Date: Mon 18 Oct 2021 15:46 Selected Answer: - Upvotes: 2

Looking at the ENTIRE document the answer has to be findsimilar: You cannot send the properties like faceListID and largeFaceListId to /detect

Comment 14

ID: 438522 User: nitkat Badges: - Relative Date: 4 years, 6 months ago Absolute Date: Fri 03 Sep 2021 14:52 Selected Answer: - Upvotes: 4

The Answer is correct. The question asks to "find photos of a person based on a sample image". Key is "based on a sample image". Only detect does this : https://docs.microsoft.com/en-us/rest/api/faceapi/face/detect-with-url. Find Similars is used to search the similar-looking faces from a faceId array, a face list or a large face list

Comment 14.1

ID: 464198 User: Zoul Badges: - Relative Date: 4 years, 4 months ago Absolute Date: Mon 18 Oct 2021 17:51 Selected Answer: - Upvotes: 1

detect does not take faceid. Cannot be detect !

Comment 15

ID: 429732 User: Banye27 Badges: - Relative Date: 4 years, 6 months ago Absolute Date: Mon 23 Aug 2021 08:34 Selected Answer: - Upvotes: 2

https://docs.microsoft.com/en-us/rest/api/faceapi/face/find-similar

Comment 16

ID: 394786 User: azurelearner666 Badges: - Relative Date: 4 years, 8 months ago Absolute Date: Wed 30 Jun 2021 15:49 Selected Answer: - Upvotes: 2

Correct!

Comment 17

ID: 385351 User: Dalias Badges: - Relative Date: 4 years, 8 months ago Absolute Date: Sat 19 Jun 2021 10:47 Selected Answer: - Upvotes: 4

Motu is correct. verified the link too
https://docs.microsoft.com/en-us/rest/api/faceapi/face/find-similar

76. AI-102 Topic 1 Question 69

Sequence
237
Discussion ID
135061
Source URL
https://www.examtopics.com/discussions/microsoft/view/135061-exam-ai-102-topic-1-question-69-discussion/
Posted By
audlindr
Posted At
March 2, 2024, 6:47 p.m.

Question

You are building an internet-based training solution. The solution requires that a user's camera and microphone remain enabled.

You need to monitor a video stream of the user and detect when the user asks an instructor a question. The solution must minimize development effort.

What should you include in the solution?

  • A. speech-to-text in the Azure AI Speech service
  • B. language detection in Azure AI Language Service
  • C. the Face service in Azure AI Vision
  • D. object detection in Azure AI Custom Vision

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 17 comments Click to expand

Comment 1

ID: 1224763 User: Belicova Badges: Highly Voted Relative Date: 1 year, 9 months ago Absolute Date: Wed 05 Jun 2024 16:02 Selected Answer: - Upvotes: 5

Go with D
From Copilot:
To monitor a video stream of the user and detect when the user asks an instructor a question while minimizing development effort, consider using object detection. Specifically, you can leverage existing models or frameworks (such as YOLOv3) to detect people in real time from the video stream. Once you identify a person asking a question, you can trigger further actions or alerts. This approach avoids the complexity of speech-to-text or language detection and focuses on the specific task at hand. Therefore, go with D. object detection in Azure AI Custom Vision!

Comment 2

ID: 1276143 User: MostafaAbdellahAhmed Badges: Most Recent Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 17:20 Selected Answer: - Upvotes: 3

A. Speech-to-text in the Azure AI Speech service
Explanation:
Speech-to-Text in the Azure AI Speech service can transcribe spoken language into text in real time, enabling the detection of questions when users speak. This approach minimizes development effort by directly converting speech to text, allowing easy identification of questions.

Comment 3

ID: 1246436 User: SAMBIT Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Fri 12 Jul 2024 04:59 Selected Answer: - Upvotes: 1

Definitely it's not A. That's a bunker

Comment 4

ID: 1235138 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 07:33 Selected Answer: A Upvotes: 1

A is correct answer.

Comment 5

ID: 1224483 User: anto69 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 05 Jun 2024 05:01 Selected Answer: A Upvotes: 1

To minimize effort: A is enough

Comment 6

ID: 1213778 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 19 May 2024 14:46 Selected Answer: A Upvotes: 2

user's camera and microphone remain enabled, so A is right.

Comment 7

ID: 1205026 User: anntv252 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 01 May 2024 13:02 Selected Answer: A Upvotes: 1

Because the user's camera and microphone remain enabled, the Azure AI Speech service is the recommended choice.

Comment 8

ID: 1197410 User: Barry123456 Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 17 Apr 2024 21:16 Selected Answer: - Upvotes: 2

It says video stream. It doesn't say the video stream has audio. I deal with video only streams all day. Don't assume.

Comment 9

ID: 1194713 User: sivapolam90 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 13 Apr 2024 08:59 Selected Answer: A Upvotes: 1

A. speech-to-text in the Azure AI Speech service

Comment 10

ID: 1190896 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sun 07 Apr 2024 12:26 Selected Answer: A Upvotes: 2

The best option for this scenario would be A. speech-to-text in the Azure AI Speech service.

This service can transcribe the user’s spoken words into written text, which can then be analyzed to detect when a question is being asked. This would be more efficient and direct for detecting questions in a video stream, compared to the other options which focus on language detection, face recognition, and object detection. These other services might not be as effective for this specific use-case.

Comment 11

ID: 1187883 User: NullVoider_0 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 02 Apr 2024 08:28 Selected Answer: A Upvotes: 1

A. speech-to-text in the Azure AI Speech service

This service can transcribe the spoken words into text in real-time, which can then be analyzed to detect questions. It’s an efficient way to monitor for specific verbal cues or keywords that indicate a question is being asked, without the need for extensive programming or manual review. This approach minimizes development effort while providing a robust solution for the requirement.

Comment 12

ID: 1175024 User: Murtuza Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Sat 16 Mar 2024 15:37 Selected Answer: - Upvotes: 3

The correct CHOICE is C. I made a silly typo but my explanations are right on point.

Comment 13

ID: 1175020 User: Murtuza Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Sat 16 Mar 2024 15:33 Selected Answer: - Upvotes: 3

The other options are not directly relevant to detecting user questions in a video stream:

Speech-to-text (Option A): Converts spoken language into text. While useful for transcribing audio, it doesn’t directly address identifying user questions.
Language detection (Option B): Determines the language of text. It’s not specifically designed for monitoring video streams or detecting questions.
Object detection (Option D): Identifies objects within images, but it’s not suitable for detecting user interactions or questions.
Therefore, Option C (the Face service in Azure AI Vision) is the most appropriate choice for your scenario.

Comment 14

ID: 1175019 User: Murtuza Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Sat 16 Mar 2024 15:33 Selected Answer: A Upvotes: 1

Face Service (Azure AI Vision):
The Face service provides facial recognition capabilities, which can be used to identify when a user is facing the camera (e.g., looking at the instructor).
By analyzing facial features, expressions, and head movements, you can detect when a user is likely to be asking a question.
This approach minimizes development effort because it directly addresses the requirement of monitoring the video stream for user interactions.

Comment 14.1

ID: 1183738 User: Mehe323 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Wed 27 Mar 2024 01:19 Selected Answer: - Upvotes: 4

The user can talk, but it doesn't have to be a question. I think the focus should be on detecting whether something is a question or not and for that, you need speech to text first. Face doesn't make sense as identifying questions is not the purpose of that service: 'The Azure AI Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identification, touchless access control, and face blurring for privacy.'

https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview-identity

Comment 15

ID: 1171142 User: chandiochan Badges: - Relative Date: 2 years ago Absolute Date: Mon 11 Mar 2024 17:38 Selected Answer: A Upvotes: 2

speech-to-text in the Azure AI Speech service.

This service can transcribe spoken words into written text in real-time, allowing you to monitor the audio for specific triggers, like questions, which can then be further processed or flagged for response. This solution is efficient and requires minimal development effort for integrating audio streaming and speech recognition capabilities.

Comment 15.1

ID: 1181173 User: AlviraTony Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Sat 23 Mar 2024 22:49 Selected Answer: - Upvotes: 1

[ChatGPT]
A. Speech-to-text in the Azure AI Speech service.

Explanation:

Speech-to-text functionality can convert spoken words into text, allowing you to analyze the content of the speech.
By using speech-to-text, you can transcribe the user's spoken questions and then analyze the text to detect if a question is being asked to the instructor.
This option aligns with the requirement to monitor the user's speech in real-time without significant development effort.

77. AI-102 Topic 1 Question 55

Sequence
238
Discussion ID
112134
Source URL
https://www.examtopics.com/discussions/microsoft/view/112134-exam-ai-102-topic-1-question-55-discussion/
Posted By
973b658
Posted At
June 14, 2023, 8:38 a.m.

Question

HOTSPOT -

You have an Azure subscription that has the following configurations:

• Subscription ID: 8d3591aa-96b8-4737-ad09-00f9b1ed35ad
• Tenant ID: 3edfe572-cb54-3ced-ae12-c5c177f39a12

You plan to create a resource that will perform sentiment analysis and optical character recognition (OCR).

You need to use an HTTP request to create the resource in the subscription. The solution must use a single key and endpoint.

How should you complete the request? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 3 comments Click to expand

Comment 1

ID: 995903 User: kiro_kocha Badges: - Relative Date: 1 year, 6 months ago Absolute Date: Sun 01 Sep 2024 11:39 Selected Answer: - Upvotes: 1

The solution must use a single key and endpoint. - Cognitive Services
You need to use an HTTP request to create the resource in the subscription. - obviously subscription
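The HTTP request the answer implies can be sketched as an ARM PUT that creates a multi-service account (kind "CognitiveServices", which gives a single key and endpoint for both sentiment analysis and OCR). The resource-group name, region, SKU, and api-version below are illustrative assumptions:

```python
import json

# Sketch of the ARM create-resource call. Only the subscription ID is
# taken from the question; resource group, account name, location, and
# api-version are placeholders.
SUBSCRIPTION_ID = "8d3591aa-96b8-4737-ad09-00f9b1ed35ad"

def build_create_resource_request(name: str, resource_group: str) -> tuple[str, str]:
    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.CognitiveServices/accounts/{name}"
        "?api-version=2023-05-01"
    )
    body = {
        "location": "eastus",
        "kind": "CognitiveServices",   # multi-service: one key, one endpoint
        "sku": {"name": "S0"},
        "properties": {},
    }
    return url, json.dumps(body)
```

A single-service kind such as "TextAnalytics" or "ComputerVision" would need two resources (two keys, two endpoints), which the scenario rules out.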

Comment 2

ID: 923792 User: Tin_Tin Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 15 Jun 2024 08:46 Selected Answer: - Upvotes: 3

The answer is correct. https://learn.microsoft.com/en-us/rest/api/resources/resources/create-or-update

Comment 3

ID: 922807 User: 973b658 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 14 Jun 2024 08:38 Selected Answer: - Upvotes: 2

It is true.

78. AI-102 Topic 2 Question 7

Sequence
247
Discussion ID
62223
Source URL
https://www.examtopics.com/discussions/microsoft/view/62223-exam-ai-102-topic-2-question-7-discussion/
Posted By
Derin_tade
Posted At
Sept. 16, 2021, 5:45 p.m.

Question

DRAG DROP -
You have a Custom Vision resource named acvdev in a development environment.
You have a Custom Vision resource named acvprod in a production environment.
In acvdev, you build an object detection model named obj1 in a project named proj1.
You need to move obj1 to acvprod.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand

Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/copy-move-projects

Comments 9 comments Click to expand

Comment 1

ID: 500780 User: snna4 Badges: Highly Voted Relative Date: 4 years, 2 months ago Absolute Date: Mon 13 Dec 2021 18:34 Selected Answer: - Upvotes: 41

1. GetProjects on acvDEV
2. ExportProjects on acvDEV
3. ImportProjects on acvPROD
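The three-call sequence above can be sketched as REST requests against the Custom Vision training API, following the copy/move-projects guide. The v3.3 version string is an assumption; check it against your resource's API version:

```python
# Sketch of the export/import sequence. Each helper returns the HTTP
# method and URL rather than sending the request, so the flow is easy
# to inspect; the v3.3 path segment is an assumed API version.
def get_projects(endpoint: str) -> tuple[str, str]:
    # 1. On acvdev: list projects to find proj1's project ID
    return "GET", f"{endpoint}/customvision/v3.3/Training/projects"

def export_project(endpoint: str, project_id: str) -> tuple[str, str]:
    # 2. On acvdev: request an export token for the project
    return "GET", f"{endpoint}/customvision/v3.3/Training/projects/{project_id}/export"

def import_project(endpoint: str, token: str) -> tuple[str, str]:
    # 3. On acvprod: import the project using the token from step 2
    return "POST", f"{endpoint}/customvision/v3.3/Training/projects/import?token={token}"
```

Note that steps 1 and 2 run against the development resource's endpoint and key, while step 3 runs against the production resource.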

Comment 2

ID: 1266130 User: anto69 Badges: Most Recent Relative Date: 1 year, 6 months ago Absolute Date: Thu 15 Aug 2024 04:04 Selected Answer: - Upvotes: 2

1. GetProjects on acvDEV
2. ExportProjects on acvDEV
3. ImportProjects on acvPROD

Comment 3

ID: 1220249 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:50 Selected Answer: - Upvotes: 1

1. Use GetProjects endpoint on acvDEV
2. Use ExportProjects endpoint on acvDEV
3. Use ImportProjects endpoint on acvPROD

Comment 4

ID: 1182402 User: varinder82 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 25 Mar 2024 11:45 Selected Answer: - Upvotes: 2

Final Answer:
1. GetProjects on acvDEV
2. ExportProjects on acvDEV
3. ImportProjects on acvPROD

Comment 5

ID: 632808 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 05:48 Selected Answer: - Upvotes: 1

Get on Dev
Export on Dev
Import on Prod

Comment 6

ID: 629382 User: Jzerpa_ccs Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Sun 10 Jul 2022 01:59 Selected Answer: - Upvotes: 3

1. GetProjects on acvDEV
2. ExportProjects on acvDEV
3. ImportProjects on acvPROD

https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/copy-move-projects

Comment 7

ID: 446069 User: Derin_tade Badges: - Relative Date: 4 years, 5 months ago Absolute Date: Thu 16 Sep 2021 17:45 Selected Answer: - Upvotes: 3

Given link proves this is correct.

Comment 7.1

ID: 451478 User: DS_sam2701 Badges: - Relative Date: 4 years, 5 months ago Absolute Date: Sat 25 Sep 2021 17:05 Selected Answer: - Upvotes: 2

Here in this document it is clearly mentioned how can you move your resource from dev. to prod. : https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-tutorial-pattern#what-did-this-tutorial-accomplish

Comment 7.1.1

ID: 464644 User: fhqhfhqh Badges: - Relative Date: 4 years, 4 months ago Absolute Date: Tue 19 Oct 2021 14:21 Selected Answer: - Upvotes: 2

Provided link is for LUIS. Incorrect Link.

79. AI-102 Topic 3 Question 28

Sequence
257
Discussion ID
88194
Source URL
https://www.examtopics.com/discussions/microsoft/view/88194-exam-ai-102-topic-3-question-28-discussion/
Posted By
halfway
Posted At
Nov. 21, 2022, 2:13 p.m.

Question

You need to measure the public perception of your brand on social media by using natural language processing.
Which Azure service should you use?

  • A. Language service
  • B. Content Moderator
  • C. Computer Vision
  • D. Form Recognizer

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 6 comments Click to expand

Comment 1

ID: 1261846 User: moonlightc Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Wed 07 Aug 2024 00:28 Selected Answer: A Upvotes: 4

Language Service to detect the sentiment

Comment 2

ID: 1229232 User: reigenchimpo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 12 Jun 2024 16:19 Selected Answer: A Upvotes: 1

The answer is A, although it may not make sense for a moment. Please read the explanation carefully.

Comment 3

ID: 1217520 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 15:45 Selected Answer: A Upvotes: 1

It must be A.

Comment 4

ID: 1160847 User: anto69 Badges: - Relative Date: 2 years ago Absolute Date: Tue 27 Feb 2024 18:04 Selected Answer: A Upvotes: 1

A is the only meaningful option

Comment 5

ID: 1130243 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Wed 24 Jan 2024 06:25 Selected Answer: A Upvotes: 3

no doubt A

Comment 6

ID: 723528 User: halfway Badges: - Relative Date: 3 years, 3 months ago Absolute Date: Mon 21 Nov 2022 14:13 Selected Answer: A Upvotes: 2

Text Analytics, sentiment analysis: https://azure.microsoft.com/en-us/products/cognitive-services/text-analytics/#features
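The sentiment-analysis feature halfway cites can be sketched as a Language service REST request. The endpoint, api-version, and sample text below are illustrative assumptions:

```python
import json

# Sketch of a Language service sentiment-analysis request, matching
# the "measure public perception" scenario. The payload shape follows
# the analyze-text REST reference; endpoint and text are placeholders.
def build_sentiment_request(endpoint: str, text: str) -> tuple[str, str]:
    url = f"{endpoint}/language/:analyze-text?api-version=2023-04-01"
    body = {
        "kind": "SentimentAnalysis",
        "analysisInput": {
            "documents": [{"id": "1", "language": "en", "text": text}]
        },
    }
    return url, json.dumps(body)
```

The response labels each document (and sentence) as positive, neutral, or negative with confidence scores, which is what brand-perception monitoring aggregates over social posts.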

80. AI-102 Topic 2 Question 26

Sequence
259
Discussion ID
110625
Source URL
https://www.examtopics.com/discussions/microsoft/view/110625-exam-ai-102-topic-2-question-26-discussion/
Posted By
mmaguero
Posted At
May 31, 2023, 9:52 a.m.

Question

DRAG DROP -

You need to analyze video content to identify any mentions of specific company names.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

image

Suggested Answer

image
Answer Description Click to expand


Comments 6 comments Click to expand

Comment 1

ID: 1248790 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Tue 16 Jul 2024 11:10 Selected Answer: - Upvotes: 1

1. Sign in to Azure Video Analyzer for Media website
2. From Content model customization, select Brands
3. Add specific company names to include list

Comment 2

ID: 1220318 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:07 Selected Answer: - Upvotes: 3

1. Sign in to the website (Video Analyzer)
2. From Brands
3. Add to include list

Comment 3

ID: 1214363 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 15:47 Selected Answer: - Upvotes: 2

Sign in website, From Brands, Add include list.
I memorized this line.

Comment 4

ID: 1152264 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 23:43 Selected Answer: - Upvotes: 3

login to azure video analyzer==>select brands tab to customize the analyzer==>add specific company names to the "include" list

Comment 5

ID: 1126626 User: josebernabeo Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Fri 19 Jan 2024 12:10 Selected Answer: - Upvotes: 1

Now it's called the "Azure AI Video Indexer website"

Comment 6

ID: 910975 User: mmaguero Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Wed 31 May 2023 09:52 Selected Answer: - Upvotes: 3

ChatGPT agrees:
Answer Area:

Sign in to the Azure Video Analyzer for Media website.
From Content model customization, select Brands.
Add the specific company names to the include list.

81. AI-102 Topic 2 Question 29

Sequence
261
Discussion ID
108601
Source URL
https://www.examtopics.com/discussions/microsoft/view/108601-exam-ai-102-topic-2-question-29-discussion/
Posted By
Rob77
Posted At
May 6, 2023, 5:51 a.m.

Question

HOTSPOT -

You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands.

You have the following code segment.

image

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 7 comments Click to expand

Comment 1

ID: 1248477 User: krzkrzkra Badges: - Relative Date: 1 year, 7 months ago Absolute Date: Mon 15 Jul 2024 19:27 Selected Answer: - Upvotes: 1

YYN is the answer.

Comment 2

ID: 1220327 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:15 Selected Answer: - Upvotes: 1

The answer is YYN.

Comment 3

ID: 1214367 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 15:52 Selected Answer: - Upvotes: 1

Yes Yes No

Comment 4

ID: 1132969 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 27 Jan 2024 02:05 Selected Answer: - Upvotes: 2

answer is correct: Y Y N
the response only gives the top-left corner plus the width and height measured from that origin

Comment 5

ID: 1006049 User: jangotango Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Tue 12 Sep 2023 23:36 Selected Answer: - Upvotes: 1

YYY - the last one is Y because all coordinates together give you top left and bottom right.

Comment 5.1

ID: 1012872 User: AnonymousJhb Badges: - Relative Date: 2 years, 5 months ago Absolute Date: Thu 21 Sep 2023 09:02 Selected Answer: - Upvotes: 3

YYN. not the bottom right

Comment 6

ID: 890453 User: Rob77 Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Sat 06 May 2023 05:51 Selected Answer: - Upvotes: 1

Correct YYN. The last one is width and height.
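The point behind the "No" can be made concrete: the brands result reports each bounding box as a top-left corner plus width and height, so the bottom-right corner must be derived rather than read directly. A small illustration, assuming the rectangle uses x/y/w/h keys as in the analyze response:

```python
# The Computer Vision brands result describes a detected logo's box as
# (x, y, w, h): top-left corner plus extent. The bottom-right corner
# is computed from those four values, never returned on its own.
def bottom_right(rectangle: dict) -> tuple[int, int]:
    return (rectangle["x"] + rectangle["w"], rectangle["y"] + rectangle["h"])

box = {"x": 10, "y": 20, "w": 100, "h": 50}
# bottom-right corner works out to (x + w, y + h) = (110, 70)
```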

82. AI-102 Topic 4 Question 24

Sequence
278
Discussion ID
143503
Source URL
https://www.examtopics.com/discussions/microsoft/view/143503-exam-ai-102-topic-4-question-24-discussion/
Posted By
anto69
Posted At
July 7, 2024, 2:53 p.m.

Question

HOTSPOT -

You are building a language learning solution.

You need to recommend which Azure services can be used to perform the following tasks:

• Analyze lesson plans submitted by teachers and extract key fields, such as lesson times and required texts.
• Analyze learning content and provide students with pictures that represent commonly used words or phrases in the text.

The solution must minimize development effort.

Which Azure service should you recommend for each task? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

image

Suggested Answer

image
Answer Description Click to expand


Comments 2 comments Click to expand

Comment 1

ID: 1244974 User: Toby86 Badges: Highly Voted Relative Date: 1 year, 8 months ago Absolute Date: Tue 09 Jul 2024 17:27 Selected Answer: - Upvotes: 8

Answer is correct.
Immersive Reader can indeed provide pictures for commonly used words in text:

https://learn.microsoft.com/en-us/azure/ai-services/immersive-reader/overview#display-pictures-for-common-words

Comment 2

ID: 1243854 User: anto69 Badges: Highly Voted Relative Date: 1 year, 8 months ago Absolute Date: Sun 07 Jul 2024 14:53 Selected Answer: - Upvotes: 5

plans: Azure AI Document Intelligence
content: Immersive Reader

confirmed by ChatGPT

83. AI-102 Topic 9 Question 1

Sequence
280
Discussion ID
78075
Source URL
https://www.examtopics.com/discussions/microsoft/view/78075-exam-ai-102-topic-9-question-1-discussion/
Posted By
ninjia
Posted At
Aug. 17, 2022, 7:36 p.m.

Question

DRAG DROP -
You are planning the product creation project.
You need to recommend a process for analyzing videos.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand

Scenario: All videos must have transcripts that are associated to the video and included in product descriptions.
Product descriptions, transcripts, and alt text must be available in English, Spanish, and Portuguese.
Step 1: Upload the video to blob storage
Given a video or audio file, the file is first dropped into Blob Storage.
Step 2: Index the video by using the Video Indexer API.
When a video is indexed, Video Indexer produces the JSON content that contains details of the specified video insights. The insights include: transcripts, OCRs, faces, topics, blocks, etc.
Step 3: Extract the transcript from the Video Indexer API.
Step 4: Translate the transcript by using the Translator API.
Reference:
https://azure.microsoft.com/en-us/blog/get-video-insights-in-even-more-languages/
https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-output-json-v2

Comments 6 comments Click to expand

Comment 1

ID: 648172 User: ninjia Badges: Highly Voted Relative Date: 3 years, 6 months ago Absolute Date: Wed 17 Aug 2022 19:36 Selected Answer: - Upvotes: 11

1. Upload the video to blob storage. - choose blob over file
2. Index the video by using the Video Indexer API. - choose Video Indexer over Computer
Vision API.
3. Extract the transcript from the Video Indexer API.
4. Translate the transcript by using the Translator API. - Support requirement "Remove the need for manual translations."

Reference:
https://docs.microsoft.com/en-us/azure/azure-video-indexer/video-indexer-overview#what-can-i-do-with-azure-video-indexer

Comment 1.1

ID: 771614 User: am20 Badges: - Relative Date: 3 years, 2 months ago Absolute Date: Tue 10 Jan 2023 17:32 Selected Answer: - Upvotes: 2

Agree, as the other options don't make sense. However, the videos are already in a storage account.

Comment 2

ID: 1236068 User: rookiee1111 Badges: Most Recent Relative Date: 1 year, 8 months ago Absolute Date: Mon 24 Jun 2024 02:22 Selected Answer: - Upvotes: 1

1. Upload to Azure Blob Storage
2. Index the video to Azure video indexer for media
3. Extract transcript Azure video indexer for media
4. Use translator API to translate.

Comment 3

ID: 1133366 User: evangelist Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 27 Jan 2024 14:21 Selected Answer: - Upvotes: 4

(1) Upload the video to blob storage: Store the video in a reliable and scalable location like Azure Blob Storage. This enables access for subsequent analysis steps.
(2) Index the video by using the Azure Video Analyzer for Media (previously Video Indexer) API: Utilize Video Analyzer for Media to automatically extract key information like objects, actions, and emotions from the video. This provides valuable context for further analysis.
(3) Extract the transcript from the Azure Video Analyzer for Media (previously Video Indexer) API: Leverage the same Video Analyzer for Media API to generate a transcript of the spoken audio in the video. This enables textual analysis of the content.
(4) Translate the transcript by using the Translator API (optional): If your target audience uses languages other than the source language of the video, utilize the Translator API to translate the extracted transcript into the desired languages. This enables multilingual product descriptions or accessibility features.
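Step 3 above (extract the transcript) amounts to reading the transcript block of the Video Indexer insights JSON. A minimal sketch, assuming the output structure described in the video-indexer-output-json reference; the sample data is made up for illustration:

```python
# The insights JSON nests the transcript under videos[0].insights as a
# list of timed segments, each with a "text" field. Joining the texts
# reconstructs the transcript for translation in step 4.
def extract_transcript(index_json: dict) -> str:
    segments = index_json["videos"][0]["insights"].get("transcript", [])
    return " ".join(segment["text"] for segment in segments)

sample = {
    "videos": [{"insights": {"transcript": [
        {"id": 1, "text": "Welcome to the product video."},
        {"id": 2, "text": "Today we build a chair."},
    ]}}]
}
```

The extracted string can then be posted to the Translator API once per target language (Spanish and Portuguese in the scenario), removing the manual-translation step.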

Comment 4

ID: 990781 User: tranatrana Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Sat 26 Aug 2023 15:21 Selected Answer: - Upvotes: 2

Scenario: Scenario: All videos must have transcripts that are associated to the video and included in product descriptions.
Product descriptions, transcripts, and alt text must be available in English, Spanish, and Portuguese.
Otherwise, it does not make sense!

Comment 5

ID: 916793 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Wed 07 Jun 2023 04:24 Selected Answer: - Upvotes: 3

answer is correct

84. AI-102 Topic 1 Question 36

Sequence
281
Discussion ID
125119
Source URL
https://www.examtopics.com/discussions/microsoft/view/125119-exam-ai-102-topic-1-question-36-discussion/
Posted By
rdemontis
Posted At
Nov. 1, 2023, 5:20 p.m.

Question

SIMULATION -
You need to create a Form Recognizer resource named fr12345678.
Use the Form Recognizer sample labeling tool at https://fott-2-1.azurewebsites.net/ to analyze the invoice located in the C:\Resources\Invoices folder.
Save the results as C:\Resources\Invoices\Results.json.
To complete this task, sign in to the Azure portal and open the Form Recognizer sample labeling tool.

Suggested Answer

image
Answer Description Click to expand


Comments 7 comments Click to expand

Comment 1

ID: 1235455 User: p2pitu Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 17:03 Selected Answer: - Upvotes: 2

I guess these are here to increase the page count so that more questions can sit behind the paywall.

Comment 2

ID: 1217598 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:38 Selected Answer: - Upvotes: 1

Simulation questions will not appear on the actual exam as of May 25, 2024; ET should remove this type of question.

Comment 3

ID: 1193416 User: michaelmorar Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 11 Apr 2024 06:15 Selected Answer: - Upvotes: 1

The issue I found when trying this is that you cannot choose the downloaded JSON file name. So one cannot ensure that the file is called Results.json - the studio names it for you.

Comment 3.1

ID: 1207427 User: MonicaKarim Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Mon 06 May 2024 18:11 Selected Answer: - Upvotes: 1

Don't choose download; instead press Ctrl+S, then save as C:\Resources\Invoices\Results.json

Comment 4

ID: 1173299 User: GHill1982 Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Thu 14 Mar 2024 10:51 Selected Answer: - Upvotes: 3

I think this would be the method as of March 2024:
1. Create a Document Intelligence resource in Azure AI services
2. Launch Document Intelligence Studio
3. Select Invoices from the Prebuilt models
4. Configure the service resource by selecting your Document Intelligence Resource
5. Drag & drop or browse for the invoice in C:\Resources\Invoices
6. Click Run analysis
7. Click on Result and the Download icon to save the JSON results file
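The portal steps above map onto a single REST call; a minimal sketch that builds the request for the prebuilt invoice model (the resource name, key, and the 2023-07-31 API version are assumptions for illustration):

```python
from urllib.parse import urlencode

def build_invoice_analyze_request(endpoint: str, key: str,
                                  api_version: str = "2023-07-31") -> dict:
    """Build the POST request that submits a document to the
    prebuilt invoice model of Azure AI Document Intelligence."""
    query = urlencode({"api-version": api_version})
    return {
        "method": "POST",
        "url": f"{endpoint}/formrecognizer/documentModels/prebuilt-invoice:analyze?{query}",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,           # resource key
            "Content-Type": "application/octet-stream",  # raw file bytes go in the body
        },
    }

# Hypothetical resource name matching the task:
req = build_invoice_analyze_request(
    "https://fr12345678.cognitiveservices.azure.com", "<key>")
print(req["url"])
```

The analyze call returns an Operation-Location header; polling that URL yields the JSON results that the task asks to save.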

Comment 5

ID: 1123899 User: josebernabeo Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Tue 16 Jan 2024 06:59 Selected Answer: - Upvotes: 3

Are these types of questions going to be on the exam?

Comment 5.1

ID: 1164255 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 18:41 Selected Answer: - Upvotes: 6

It was not on 2-Mar-2024

85. AI-102 Topic 2 Question 20

Sequence
290
Discussion ID
91092
Source URL
https://www.examtopics.com/discussions/microsoft/view/91092-exam-ai-102-topic-2-question-20-discussion/
Posted By
HotDurian
Posted At
Dec. 12, 2022, 1:51 a.m.

Question

You need to build a solution that will use optical character recognition (OCR) to scan sensitive documents by using the Computer Vision API. The solution must
NOT be deployed to the public cloud.
What should you do?

  • A. Build an on-premises web app to query the Computer Vision endpoint.
  • B. Host the Computer Vision endpoint in a container on an on-premises server.
  • C. Host an exported Open Neural Network Exchange (ONNX) model on an on-premises server.
  • D. Build an Azure web app to query the Computer Vision endpoint.

Suggested Answer

B

Answer Description Click to expand


Community Answer Votes

Comments 6 comments Click to expand

Comment 1

ID: 1235172 User: HaraTadahisa Badges: - Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:15 Selected Answer: B Upvotes: 3

I say this answer is B.

Comment 2

ID: 1220335 User: taiwan_is_not_china Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 16:20 Selected Answer: B Upvotes: 3

B is the correct answer.

Comment 3

ID: 1214381 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:06 Selected Answer: B Upvotes: 3

B is right answer.

Comment 4

ID: 1152261 User: evangelist Badges: - Relative Date: 2 years ago Absolute Date: Fri 16 Feb 2024 23:38 Selected Answer: B Upvotes: 1

The model is hosted on-premises, but billing information has to be provided to the on-premises container so that it can report usage.
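As the comment notes, the container exposes the same Read (OCR) route shape locally while billing settings are supplied at container startup; a sketch with hypothetical host and resource values:

```python
def read_analyze_url(host: str = "http://localhost:5000",
                     api_version: str = "v3.2") -> str:
    """URL for the Read (OCR) operation exposed by a locally hosted
    Computer Vision Read container -- same route shape as the cloud
    endpoint, so app code needs only a different base address."""
    return f"{host}/vision/{api_version}/read/analyze"

# Usage metering settings passed to the container at startup
# (hypothetical values shown; the container phones home to the
# Billing endpoint but document content stays on-premises):
container_settings = {
    "Eula": "accept",
    "Billing": "https://<your-resource>.cognitiveservices.azure.com/",
    "ApiKey": "<your-resource-key>",
}

print(read_analyze_url())
```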

Comment 5

ID: 839794 User: RAN_L Badges: - Relative Date: 2 years, 12 months ago Absolute Date: Wed 15 Mar 2023 11:53 Selected Answer: B Upvotes: 2

B. Host the Computer Vision endpoint in a container on an on-premises server.

Since the solution should not be deployed to the public cloud, option B is the correct answer. By hosting the Computer Vision endpoint in a container on an on-premises server, the solution can still leverage the capabilities of the Computer Vision API while keeping the processing and data within the on-premises environment. Option A and D both involve using a web app, which would likely require hosting in the public cloud. Option C involves hosting an exported ONNX model, which may not have the same capabilities as the Computer Vision API.

Comment 6

ID: 742258 User: HotDurian Badges: - Relative Date: 3 years, 3 months ago Absolute Date: Mon 12 Dec 2022 01:51 Selected Answer: B Upvotes: 2

Answer is correct.

86. AI-102 Topic 3 Question 70

Sequence
293
Discussion ID
135087
Source URL
https://www.examtopics.com/discussions/microsoft/view/135087-exam-ai-102-topic-3-question-70-discussion/
Posted By
chandiochan
Posted At
March 3, 2024, 2:04 a.m.

Question

You are designing a content management system.

You need to ensure that the reading experience is optimized for users who have reduced comprehension and learning differences, such as dyslexia. The solution must minimize development effort.

Which Azure service should you include in the solution?

  • A. Azure AI Immersive Reader
  • B. Azure AI Translator
  • C. Azure AI Document Intelligence
  • D. Azure AI Language

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 5 comments Click to expand

Comment 1

ID: 1164439 User: chandiochan Badges: Highly Voted Relative Date: 2 years ago Absolute Date: Sun 03 Mar 2024 02:04 Selected Answer: - Upvotes: 9

https://learn.microsoft.com/en-us/azure/ai-services/immersive-reader/overview

Comment 1.1

ID: 1177388 User: rober13 Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Tue 19 Mar 2024 15:00 Selected Answer: - Upvotes: 2

thanks for the reference!

Comment 2

ID: 1235181 User: HaraTadahisa Badges: Most Recent Relative Date: 1 year, 8 months ago Absolute Date: Sat 22 Jun 2024 08:19 Selected Answer: A Upvotes: 3

Immersive Reader

Comment 3

ID: 1229207 User: reigenchimpo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Wed 12 Jun 2024 16:09 Selected Answer: A Upvotes: 2

A is the answer.
Immersive Reader is well suited to users with dyslexia.

Comment 4

ID: 1196948 User: michaelmorar Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Wed 17 Apr 2024 04:40 Selected Answer: A Upvotes: 2

Immersive Reader improves accessibility.

87. AI-102 Topic 2 Question 4

Sequence
295
Discussion ID
77130
Source URL
https://www.examtopics.com/discussions/microsoft/view/77130-exam-ai-102-topic-2-question-4-discussion/
Posted By
Daemon69
Posted At
June 28, 2022, 4:15 a.m.

Question

DRAG DROP -
You are developing a webpage that will use the Azure Video Analyzer for Media (previously Video Indexer) service to display videos of internal company meetings.
You embed the Player widget and the Cognitive Insights widget into the page.
You need to configure the widgets to meet the following requirements:
✑ Ensure that users can search for keywords.
✑ Display the names and faces of people in the video.
✑ Show captions in the video in English (United States).
How should you complete the URL for each widget? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
image

Suggested Answer

image
Answer Description Click to expand

Reference:
https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-embed-widgets

Comments 7 comments Click to expand

Comment 1

ID: 636586 User: Eltooth Badges: Highly Voted Relative Date: 3 years, 7 months ago Absolute Date: Mon 25 Jul 2022 11:04 Selected Answer: - Upvotes: 10

Answer is correct.

https://docs.microsoft.com/en-us/azure/azure-video-indexer/video-indexer-embed-widgets

Comment 2

ID: 1234515 User: HaraTadahisa Badges: Most Recent Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 16:58 Selected Answer: - Upvotes: 1

1. widgets is people,keywords
2. controls is search
3. showcaptions is true
4. captions is en-US

Comment 3

ID: 1220241 User: hatanaoki Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Tue 28 May 2024 14:41 Selected Answer: - Upvotes: 1

widgets: people,keywords
controls: search
showcaptions: true
captions: en-US

Comment 4

ID: 1218333 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sat 25 May 2024 14:59 Selected Answer: - Upvotes: 1

widgets=people,keywords
controls=search
showcaptions=true
captions=en-US

Comment 5

ID: 1214394 User: takaimomoGcup Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Mon 20 May 2024 16:15 Selected Answer: - Upvotes: 1

widgets=people,keywords; controls=search; showcaptions=true; captions=en-US
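The parameter lists in the comments above can be combined into the two embed URLs; a sketch following the embed-widgets documentation, with placeholder account and video IDs:

```python
from urllib.parse import urlencode

BASE = "https://www.videoindexer.ai/embed"

def insights_widget_url(account_id: str, video_id: str) -> str:
    # Cognitive Insights widget: show people and keywords, enable keyword search
    params = urlencode({"widgets": "people,keywords", "controls": "search"},
                       safe=",")
    return f"{BASE}/insights/{account_id}/{video_id}/?{params}"

def player_widget_url(account_id: str, video_id: str) -> str:
    # Player widget: show captions in English (United States)
    params = urlencode({"showcaptions": "true", "captions": "en-US"})
    return f"{BASE}/player/{account_id}/{video_id}/?{params}"
```

These URLs would then be used as the `src` of the two iframes embedded in the page.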

Comment 6

ID: 623594 User: Daemon69 Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Tue 28 Jun 2022 04:16 Selected Answer: - Upvotes: 1

https://docs.microsoft.com/en-us/azure/azure-video-indexer/video-indexer-embed-widgets

Comment 7

ID: 623593 User: Daemon69 Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Tue 28 Jun 2022 04:15 Selected Answer: - Upvotes: 2

Cognitive Insights widget - answer is correct
Player widget - answer is correct

88. AI-102 Topic 3 Question 3

Sequence
296
Discussion ID
56452
Source URL
https://www.examtopics.com/discussions/microsoft/view/56452-exam-ai-102-topic-3-question-3-discussion/
Posted By
azurelearner666
Posted At
June 30, 2021, 7:09 p.m.

Question

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You develop an application to identify species of flowers by training a Custom Vision model.
You receive images of new flower species.
You need to add the new images to the classifier.
Solution: You add the new images and labels to the existing model. You retrain the model, and then publish the model.
Does this meet the goal?

  • A. Yes
  • B. No

Suggested Answer

A

Answer Description Click to expand


Community Answer Votes

Comments 6 comments Click to expand

Comment 1

ID: 394988 User: azurelearner666 Badges: Highly Voted Relative Date: 4 years, 8 months ago Absolute Date: Wed 30 Jun 2021 19:09 Selected Answer: - Upvotes: 18

Correct!
uploading, tagging, retraining and publishing the model

Comment 2

ID: 612715 User: PHD_CHENG Badges: Highly Voted Relative Date: 3 years, 9 months ago Absolute Date: Tue 07 Jun 2022 13:29 Selected Answer: - Upvotes: 5

Was on exam 7 Jun 2022

Comment 3

ID: 1234480 User: HaraTadahisa Badges: Most Recent Relative Date: 1 year, 8 months ago Absolute Date: Fri 21 Jun 2024 16:38 Selected Answer: A Upvotes: 1

"add the new images and labels to the existing model. You retrain the model, and then publish the model." is correct. HipHop!

Comment 4

ID: 984556 User: james2033 Badges: - Relative Date: 2 years, 6 months ago Absolute Date: Fri 18 Aug 2023 16:01 Selected Answer: A Upvotes: 1

add the new images and labels to the existing model.

You retrain the model,

and then publish the model.

--> Perfect.

Comment 5

ID: 633030 User: Eltooth Badges: - Relative Date: 3 years, 7 months ago Absolute Date: Mon 18 Jul 2022 15:01 Selected Answer: A Upvotes: 1

A is correct answer : Yes

Instead, the model needs to be extended and retrained (Udemy answer).
Note: Use Smart Labeler to generate suggested tags for images. This lets you label a large number of images more quickly when training a Custom Vision model.

Comment 6

ID: 628319 User: ppo12 Badges: - Relative Date: 3 years, 8 months ago Absolute Date: Thu 07 Jul 2022 12:46 Selected Answer: - Upvotes: 1

Looks good to me!

89. AI-102 Topic 4 Question 19

Sequence
302
Discussion ID
135947
Source URL
https://www.examtopics.com/discussions/microsoft/view/135947-exam-ai-102-topic-4-question-19-discussion/
Posted By
-
Posted At
March 13, 2024, 6:10 p.m.

Question

You have an app named App1 that uses a custom Azure AI Document Intelligence model to recognize contract documents.

You need to ensure that the model supports an additional contract format. The solution must minimize development effort.

What should you do?

  • A. Lower the confidence score threshold of App1.
  • B. Create a new training set and add the additional contract format to the new training set. Create and train a new custom model.
  • C. Add the additional contract format to the existing training set. Retrain the model.
  • D. Lower the accuracy threshold of App1.

Suggested Answer

C

Answer Description Click to expand


Community Answer Votes

Comments 4 comments Click to expand

Comment 1

ID: 1229837 User: reigenchimpo Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Thu 13 Jun 2024 15:28 Selected Answer: C Upvotes: 2

It is obvious that C is the correct solution.

Comment 2

ID: 1218686 User: reiwanotora Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Sun 26 May 2024 04:40 Selected Answer: C Upvotes: 1

It must be C.

Comment 3

ID: 1191614 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Mon 08 Apr 2024 15:58 Selected Answer: C Upvotes: 3

Given answer C is correct

Comment 4

ID: 1180142 User: Murtuza Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Fri 22 Mar 2024 16:51 Selected Answer: - Upvotes: 2

Add the additional contract format to the existing training set and retrain the model: This is a more efficient solution. By augmenting the existing training data with the new format, you can fine-tune the model without starting from scratch. Retraining the model with the updated dataset should help it adapt to the additional format

90. AI-102 Topic 1 Question 35

Sequence
312
Discussion ID
82103
Source URL
https://www.examtopics.com/discussions/microsoft/view/82103-exam-ai-102-topic-1-question-35-discussion/
Posted By
momentumhd
Posted At
Sept. 14, 2022, 9:46 a.m.

Question

SIMULATION -
You plan to create a solution to generate captions for images that will be read from Azure Blob Storage.
You need to create a service in Azure Cognitive Services for the solution. The service must be named captions12345678 and must use the Free pricing tier.
To complete this task, sign in to the Azure portal.

Suggested Answer

image
Answer Description Click to expand


Comments 9 comments Click to expand

Comment 1

ID: 712168 User: halfway Badges: Highly Voted Relative Date: 3 years, 4 months ago Absolute Date: Sun 06 Nov 2022 06:59 Selected Answer: - Upvotes: 10

Create a 'Computer Vision' service and use it for image captioning.
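Once the Computer Vision resource exists, the captioning itself is the v3.2 Describe operation; a sketch that builds the request (the endpoint, key, and blob URL are placeholders):

```python
from urllib.parse import urlencode

def describe_image_request(endpoint: str, key: str, image_url: str) -> dict:
    """Request asking the Computer Vision v3.2 Describe operation for a
    human-readable caption of an image, e.g. one read from Blob Storage."""
    query = urlencode({"maxCandidates": 1, "language": "en"})
    return {
        "method": "POST",
        "url": f"{endpoint}/vision/v3.2/describe?{query}",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        # The blob URL must be publicly reachable or carry a SAS token
        "body": {"url": image_url},
    }

req = describe_image_request(
    "https://captions12345678.cognitiveservices.azure.com", "<key>",
    "https://myblob.blob.core.windows.net/images/photo.jpg")
print(req["url"])
```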

Comment 2

ID: 1173236 User: GHill1982 Badges: Highly Voted Relative Date: 1 year, 12 months ago Absolute Date: Thu 14 Mar 2024 09:28 Selected Answer: - Upvotes: 5

I think the steps are:
1. Create a Computer Vision resource in Azure AI services using the Free pricing tier.
2. Launch Vision Studio.
3. Select your Computer Vision resource.
4. Add new dataset.
5. Enter a dataset name and select Image classification as the model type.
6. Select an Azure blob storage container and Create dataset.
(I don't know in the exam simulation whether the Storage account with a Blob container already exists, if not this would need to be created first)

Comment 2.1

ID: 1178662 User: Ody Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Wed 20 Mar 2024 21:11 Selected Answer: - Upvotes: 1

I tend to agree, but think the question is unclear.

Comment 3

ID: 1217621 User: nanaw770 Badges: Most Recent Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:53 Selected Answer: - Upvotes: 2

Simulation questions will not appear on the actual exam as of May 25, 2024; ET should remove this type of question.

Comment 4

ID: 914742 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Sun 04 Jun 2023 16:36 Selected Answer: - Upvotes: 2

I simulated this in my Azure Portal:
1. Create Cognitive Search Resource
2. Create a Blob Storage. Add a container and add the images with text.
3. In Cognitive Search, click Import Data then choose Data Source: Azure Blob Storage. Link the Blob Storage Connection String to the Container in #2.
4. In the Skillset Click Enable OCR then Click Image Cognitive Skills
5. Click Create till the very end and you are done

Comment 4.1

ID: 978191 User: propanther Badges: - Relative Date: 2 years, 7 months ago Absolute Date: Fri 11 Aug 2023 03:44 Selected Answer: - Upvotes: 5

Is simulation part of the exam?

Comment 4.1.1

ID: 1077514 User: RupRizal Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Wed 22 Nov 2023 17:00 Selected Answer: - Upvotes: 2

I want to know if Simulation is part of actual exam! Anyone?

Comment 4.1.1.1

ID: 1164253 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 18:41 Selected Answer: - Upvotes: 2

It was not on 2-Mar-2024

Comment 5

ID: 668731 User: momentumhd Badges: - Relative Date: 3 years, 5 months ago Absolute Date: Wed 14 Sep 2022 09:46 Selected Answer: - Upvotes: 1

Should we use Cognitive Search for this?

91. AI-102 Topic 1 Question 39

Sequence
313
Discussion ID
96461
Source URL
https://www.examtopics.com/discussions/microsoft/view/96461-exam-ai-102-topic-1-question-39-discussion/
Posted By
dev2dev
Posted At
Jan. 22, 2023, 10:05 a.m.

Question

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected] -

Azure Password: XXXXXXXXXXXX -
The following information is for technical support purposes only:

Lab Instance: 12345678 -

Task -
You plan to build an API that will identify whether an image includes a Microsoft Surface Pro or Surface Studio.
You need to deploy a service in Azure Cognitive Services for the API. The service must be named AAA12345678 and must be in the East US Azure region. The solution must use the Free pricing tier.
To complete this task, sign in to the Azure portal.

Suggested Answer

image
Answer Description Click to expand


Comments 8 comments Click to expand

Comment 1

ID: 784098 User: dev2dev Badges: Highly Voted Relative Date: 3 years, 1 month ago Absolute Date: Sun 22 Jan 2023 10:05 Selected Answer: - Upvotes: 7

Computer Vision should be the resource we need to create.

Comment 2

ID: 1217616 User: nanaw770 Badges: Most Recent Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:51 Selected Answer: - Upvotes: 3

Simulation questions will not appear on the actual exam as of May 25, 2024; ET should remove this type of question.

Comment 3

ID: 1200463 User: michaelmorar Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Tue 23 Apr 2024 05:59 Selected Answer: - Upvotes: 1

Both are Microsoft Products - so we don't need Brand recognition and it seems more like a Custom Vision solution.

HOWEVER, Computer Vision now seems to offer Product Recognition.

So, while I'm still leaning towards Custom Vision, it seems that Computer Vision might also be viable.

Question is probably out of date.

Comment 4

ID: 1173302 User: GHill1982 Badges: - Relative Date: 1 year, 12 months ago Absolute Date: Thu 14 Mar 2024 10:57 Selected Answer: - Upvotes: 3

I think this is Custom Vision too. I don't believe Computer Vision is able to detect the difference between the two different devices which both have the Microsoft brand logo.

Comment 5

ID: 1121396 User: dimsok Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Sat 13 Jan 2024 08:57 Selected Answer: - Upvotes: 2

I think it is a Custom Vision project because it is not about recognizing a brand; it is about two devices that need to be identified among tens of others.

Comment 6

ID: 1077556 User: ccampagna Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Wed 22 Nov 2023 17:36 Selected Answer: - Upvotes: 2

This is a Computer Vision task: Computer Vision can detect brand logos in an image and can also extract text for analysis if necessary. Custom Vision is useful for detecting custom objects within an image, or for classifying the image itself. More info here: https://stackoverflow.com/questions/52155632/difference-between-computer-vision-api-and-custom-vision-api

Comment 7

ID: 917325 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Wed 07 Jun 2023 17:02 Selected Answer: - Upvotes: 2

1. create cognitive service east-us, free
2. go to custom vision ai and choose #1
3. uploaded images of surface pro or studio
4. train the model
5. test
6. publish

Comment 7.1

ID: 1088224 User: AnonymousJhb Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Tue 05 Dec 2023 07:40 Selected Answer: - Upvotes: 2

this is not CUSTOM vision because - Brand detection is a specialized mode of object detection that uses a database of thousands of global logos to identify commercial brands in images or video. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.

The built-in logo database covers popular brands in consumer electronics, clothing, and more.

ONLY - If you find that the brand you're looking for is not detected by the Azure AI Vision service, you could also try creating and training your own logo detector using the Custom Vision service.

92. AI-102 Topic 1 Question 40

Sequence
314
Discussion ID
87191
Source URL
https://www.examtopics.com/discussions/microsoft/view/87191-exam-ai-102-topic-1-question-40-discussion/
Posted By
halfway
Posted At
Nov. 9, 2022, 3:54 a.m.

Question

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected] -

Azure Password: XXXXXXXXXXXX -
The following information is for technical support purposes only:

Lab Instance: 12345678 -

Task -
You need to build an API that uses the service in Azure Cognitive Services named AAA12345678 to identify whether an image includes a Microsoft Surface Pro or
Surface Studio.
To achieve this goal, you must use the sample images in the C:\Resources\Images folder.
To complete this task, sign in to the Azure portal.

Suggested Answer

image
Answer Description Click to expand


Comments 8 comments Click to expand

Comment 1

ID: 838020 User: t_isk Badges: Highly Voted Relative Date: 2 years, 12 months ago Absolute Date: Mon 13 Mar 2023 16:18 Selected Answer: - Upvotes: 10

Is it possible to get questions like this on the exam?

Comment 1.1

ID: 1124104 User: ArminZ11 Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Tue 16 Jan 2024 11:43 Selected Answer: - Upvotes: 13

End of Dec 2023, no simulations in the exam

Comment 1.1.1

ID: 1164256 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 18:42 Selected Answer: - Upvotes: 6

Beginning of March 2024, no simulations in the exam

Comment 2

ID: 1217618 User: nanaw770 Badges: Most Recent Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:51 Selected Answer: - Upvotes: 2

Simulation questions will not appear on the actual exam as of May 25, 2024; ET should remove this type of question.

Comment 3

ID: 1077555 User: ccampagna Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Wed 22 Nov 2023 17:36 Selected Answer: - Upvotes: 1

This is a Computer Vision task: Computer Vision can detect brand logos in an image and can also extract text for analysis if necessary. Custom Vision is useful for detecting custom objects within an image, or for classifying the image itself. More info here: https://stackoverflow.com/questions/52155632/difference-between-computer-vision-api-and-custom-vision-api

Comment 4

ID: 914779 User: ziggy1117 Badges: - Relative Date: 2 years, 9 months ago Absolute Date: Sun 04 Jun 2023 17:34 Selected Answer: - Upvotes: 2

steps are:
1. create cognitive resource
2. go to https://www.customvision.ai/.
3. upload images of each item. 5 minimum
4. train
5. verify via Quick test
6. publish

Comment 5

ID: 837048 User: marti_tremblay000 Badges: - Relative Date: 3 years ago Absolute Date: Sun 12 Mar 2023 14:30 Selected Answer: - Upvotes: 2

as per ChatGPT :
For identifying whether an image includes a Microsoft Surface Pro or Surface Studio, you can use the Azure Custom Vision service. Azure Custom Vision is a machine learning service that enables you to build, train, and deploy custom image classification models.

To use Azure Custom Vision for your API, you can follow these general steps:

Create an Azure Custom Vision project and upload a set of labeled images that include Microsoft Surface Pro or Surface Studio.

Train the model using the labeled images.

Publish the trained model as an API endpoint that can be used to classify new images.

Integrate the API endpoint into your own application to provide the image classification service.

Comment 6

ID: 714241 User: halfway Badges: - Relative Date: 3 years, 4 months ago Absolute Date: Wed 09 Nov 2022 03:54 Selected Answer: - Upvotes: 1

Image tagging won't work. It is either 'Brand Detection', if the sample images contain actual Surface logos, or 'Custom Vision' to train a model to detect Surface Pro/Studio devices.

93. AI-102 Topic 1 Question 41

Sequence
315
Discussion ID
125786
Source URL
https://www.examtopics.com/discussions/microsoft/view/125786-exam-ai-102-topic-1-question-41-discussion/
Posted By
rdemontis
Posted At
Nov. 11, 2023, 5:56 p.m.

Question

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected] -

Azure Password: XXXXXXXXXXXX -
The following information is for technical support purposes only:

Lab Instance: 12345678 -

Task -
You need to get insights from the video file located at C:\Resources\Video\Media.mp4.
Save the insights to C:\Resources\Video\Insights.json.
To complete this task, sign in to the Azure Video Analyzer for Media at https://www.videoindexer.ai/ by using [email protected]

Suggested Answer

image
Answer Description Click to expand


Comments 1 comment Click to expand

Comment 1

ID: 1217599 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:38 Selected Answer: - Upvotes: 2

Simulation questions will not appear on the actual exam as of May 25, 2024; ET should remove this type of question.
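For reference, the insights JSON this task asks for can also be fetched via the Video Indexer REST API rather than the portal download; a sketch with placeholder location, account, video, and token values:

```python
def video_index_url(location: str, account_id: str, video_id: str,
                    access_token: str) -> str:
    """GET URL returning the full insights JSON for an indexed video via
    the Video Indexer (Azure Video Analyzer for Media) REST API."""
    return (f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
            f"/Videos/{video_id}/Index?accessToken={access_token}")

# Placeholder IDs; the response body would be saved as Insights.json
print(video_index_url("trial", "<account-id>", "<video-id>", "<token>"))
```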

94. AI-102 Topic 1 Question 42

Sequence
316
Discussion ID
102407
Source URL
https://www.examtopics.com/discussions/microsoft/view/102407-exam-ai-102-topic-1-question-42-discussion/
Posted By
marti_tremblay000
Posted At
March 12, 2023, 2:33 p.m.

Question

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected] -

Azure Password: XXXXXXXXXXXX -
The following information is for technical support purposes only:

Lab Instance: 12345678 -

Task -
You plan to analyze stock photography and automatically generate captions for the images.
You need to create a service in Azure to analyze the images. The service must be named caption12345678 and must be in the East US Azure region. The solution must use the Free pricing tier.
In the file C:\Resources\Caption\Params.json, enter the value for Key 1 and the endpoint for the new service.
To complete this task, sign in to the Azure portal.

Suggested Answer

image
Answer Description Click to expand


Comments 2 comments Click to expand

Comment 1

ID: 1217615 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:51 Selected Answer: - Upvotes: 1

Simulation questions will not appear on the actual exam as of May 25, 2024; ET should remove this type of question.

Comment 2

ID: 837050 User: marti_tremblay000 Badges: - Relative Date: 3 years ago Absolute Date: Sun 12 Mar 2023 14:33 Selected Answer: - Upvotes: 3

create a Computer Vision service

95. AI-102 Topic 1 Question 43

Sequence
317
Discussion ID
91127
Source URL
https://www.examtopics.com/discussions/microsoft/view/91127-exam-ai-102-topic-1-question-43-discussion/
Posted By
halfway
Posted At
Dec. 12, 2022, 8:53 a.m.

Question

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: [email protected] -

Azure Password: XXXXXXXXXXXX -
The following information is for technical support purposes only:

Lab Instance: 12345678 -

Task -
You plan to build an application that will use caption12345678. The application will be deployed to a virtual network named VNet1.
You need to ensure that only virtual machines on VNet1 can access caption12345678.
To complete this task, sign in to the Azure portal.

Suggested Answer

image
Answer Description Click to expand


Comments 8 comments Click to expand

Comment 1

ID: 1217617 User: nanaw770 Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:51 Selected Answer: - Upvotes: 2

Simulation questions will not appear on the actual exam as of May 25, 2024; ET should remove this type of question.

Comment 2

ID: 1200467 User: michaelmorar Badges: - Relative Date: 1 year, 10 months ago Absolute Date: Tue 23 Apr 2024 06:08 Selected Answer: - Upvotes: 3

Networking > Firewalls and virtual networks > Selected networks and private endpoints > Add existing virtual network

Comment 3

ID: 1132481 User: AnonymousJhb Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Fri 26 Jan 2024 12:51 Selected Answer: - Upvotes: 3

This is a Custom Vision resource > go to Networking > you only need to configure access from VNet1.
- there is no mention to force traffic over the backbone only and deny internet bound traffic > which would require you then to go another step and enable private endpoints. (unless this question is incomplete, so practice both steps for fun)

Comment 4

ID: 1070231 User: AnonymousJhb Badges: - Relative Date: 2 years, 3 months ago Absolute Date: Tue 14 Nov 2023 11:27 Selected Answer: - Upvotes: 1

portal > web app > networking > access restriction >
#1 Inbound Traffic > private endpoints > add > express > name + subs + vnet + subnet + private DNS > OK,
#2 Inbound Traffic > access restriction > uncheck Allow public access = uncheck > Save,
#3 Verify > Inbound Traffic > Access Restriction ON > Private Endpoint > ON

Comment 4.1

ID: 1132482 User: AnonymousJhb Badges: - Relative Date: 2 years, 1 month ago Absolute Date: Fri 26 Jan 2024 12:51 Selected Answer: - Upvotes: 1

ignore this. there is no delete option :(

Comment 5

ID: 962063 User: bull13 Badges: - Relative Date: 2 years, 7 months ago Absolute Date: Mon 24 Jul 2023 23:08 Selected Answer: - Upvotes: 1

Select the resource -> Resource Management -> Networking -> Selected Networks and Private Endpoints

Comment 6

ID: 823746 User: dalones213 Badges: - Relative Date: 3 years ago Absolute Date: Mon 27 Feb 2023 14:46 Selected Answer: - Upvotes: 1

Create a virtual network by selecting Firewall and virtual networks

Comment 7

ID: 742526 User: halfway Badges: - Relative Date: 3 years, 3 months ago Absolute Date: Mon 12 Dec 2022 08:53 Selected Answer: - Upvotes: 3

Private endpoint is not the answer. Configure the service to allow access from selected network instead: https://learn.microsoft.com/en-us/azure/cognitive-services/cognitive-services-virtual-networks?tabs=portal#managing-virtual-network-rules

96. AI-102 Topic 1 Question 47

Sequence
319
Discussion ID
102671
Source URL
https://www.examtopics.com/discussions/microsoft/view/102671-exam-ai-102-topic-1-question-47-discussion/
Posted By
RAN_L
Posted At
March 15, 2023, 10:37 a.m.

Question

DRAG DROP
-

You have a Custom Vision service project that performs object detection. The project uses the General domain for classification and contains a trained model.

You need to export the model for use on a network that is disconnected from the internet.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

image

Suggested Answer

image
Answer Description Click to expand


Comments 10 comments Click to expand

Comment 1

ID: 845710 User: jimbojambo Badges: Highly Voted Relative Date: 2 years, 11 months ago Absolute Date: Tue 21 Mar 2023 10:20 Selected Answer: - Upvotes: 25

The provided answer is correct. As reported here
https://learn.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/export-your-model
the model must be retrained after changing the domain to compact.

Comment 2

ID: 1067013 User: Prodyna Badges: Highly Voted Relative Date: 2 years, 4 months ago Absolute Date: Fri 10 Nov 2023 08:22 Selected Answer: - Upvotes: 5

Was on the November exam.

Comment 3

ID: 1217614 User: nanaw770 Badges: Most Recent Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 16:49 Selected Answer: - Upvotes: 2

1. Change domain to General(Compact) domain
2. Retrain model using new domain
3. Export model to desired export format

Comment 4

ID: 1217341 User: funny_penguin Badges: - Relative Date: 1 year, 9 months ago Absolute Date: Fri 24 May 2024 11:22 Selected Answer: - Upvotes: 2

On the exam today, 24/05/2024. I selected zellck's response as well: General (compact), retrain, export.

Comment 5

ID: 1164260 User: audlindr Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 18:43 Selected Answer: - Upvotes: 2

This was in 2-Mar-24 exam

Comment 6

ID: 978569 User: propanther Badges: - Relative Date: 2 years, 7 months ago Absolute Date: Fri 11 Aug 2023 13:25 Selected Answer: - Upvotes: 2

Correct sequence of answer is:
1. Change domain to General(Compact) domain
2. Retrain model using new domain
3. Export model to desired export format

Ref: https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/export-your-model
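The three portal steps above can also be sketched with the Custom Vision training SDK (`azure-cognitiveservices-vision-customvision`). This is a minimal sketch, not a definitive implementation: the `trainer` and `project_id` values are assumed to come from your own setup, the `pick_compact_domain` helper is hypothetical, and the exact domain-name string should be verified against the current service.

```python
def pick_compact_domain(domains):
    """Return the id of a compact (exportable) object-detection domain.

    `domains` is a list of objects with .id, .name, and .type attributes,
    as returned by CustomVisionTrainingClient.get_domains().
    """
    for d in domains:
        if d.type == "ObjectDetection" and "compact" in d.name.lower():
            return d.id
    raise ValueError("No compact object-detection domain found")


def export_for_offline_use(trainer, project_id):
    # 1. Change the project to a compact (exportable) domain.
    project = trainer.get_project(project_id)
    project.settings.domain_id = pick_compact_domain(trainer.get_domains())
    trainer.update_project(project_id, project)

    # 2. Retrain under the new domain -- required before export.
    iteration = trainer.train_project(project_id)

    # 3. Export the trained iteration (e.g. ONNX for offline inference).
    trainer.export_iteration(project_id, iteration.id, platform="ONNX")
```

The order mirrors the accepted answer: only compact domains are exportable, and the model must be retrained after the domain change before an export can be requested.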

Comment 7

ID: 936138 User: Pixelmate Badges: - Relative Date: 2 years, 8 months ago Absolute Date: Wed 28 Jun 2023 07:15 Selected Answer: - Upvotes: 3

This appeared in the exam 28/06

Comment 8

ID: 839742 User: RAN_L Badges: - Relative Date: 2 years, 12 months ago Absolute Date: Wed 15 Mar 2023 10:37 Selected Answer: - Upvotes: 1

This sequence of actions is not correct. Changing the domain to General (compact) before retraining the model may result in reduced accuracy and performance, as the model would not be optimized for the specific domain it is intended to be used in.

Therefore, the correct sequence of actions should start with retraining the model to optimize it for the intended use case, followed by changing the domain to General (compact) to create a more compact version of the model, and then exporting it for use on a disconnected network.

So the correct sequence is:

Retrain the model
Change Domains to General (compact)
Export the model

Comment 8.1

ID: 890027 User: Rob77 Badges: - Relative Date: 2 years, 10 months ago Absolute Date: Fri 05 May 2023 14:34 Selected Answer: - Upvotes: 2

Unlikely.
1 - the model is already trained, and 2 - after changing the domain you have to retrain the model; see jimbo's link above...

Comment 8.2

ID: 1163845 User: Mehe323 Badges: - Relative Date: 2 years ago Absolute Date: Sat 02 Mar 2024 01:10 Selected Answer: - Upvotes: 2

In Jimbo's link, the retraining is the last sub step in the 'Convert to a compact domain' step. The step before that is Changing domains. So the answer given by ExamTopics is correct.

97. AI-102 Topic 3 Question 4

Sequence
321
Discussion ID
56454
Source URL
https://www.examtopics.com/discussions/microsoft/view/56454-exam-ai-102-topic-3-question-4-discussion/
Posted By
azurelearner666
Posted At
June 30, 2021, 7:09 p.m.

Question

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You develop an application to identify species of flowers by training a Custom Vision model.
You receive images of new flower species.
You need to add the new images to the classifier.
Solution: You create a new model, and then upload the new images and labels.
Does this meet the goal?

  • A. Yes
  • B. No

Suggested Answer

B

Answer Description Click to expand


Comments 9 comments Click to expand

Comment 1

ID: 1040053 User: sl_mslconsulting Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 11 Apr 2024 03:47 Selected Answer: B Upvotes: 1

The answer is B because of the limitations of the Smart Labeler: you should only request suggested tags for images whose tags have already been trained on at least once. Don't get suggestions for a new tag that you're just beginning to train. You are given new images of species the model has never seen, so how can you expect it to suggest what they are? Also, you can train the model right in the Smart Labeler; check the workflow and the limitations in the doc. https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/suggested-tags

Comment 1.1

ID: 1040054 User: sl_mslconsulting Badges: - Relative Date: 1 year, 11 months ago Absolute Date: Thu 11 Apr 2024 03:49 Selected Answer: - Upvotes: 1

Oops I meant to answer the question 2 above this one.

Comment 2

ID: 984560 User: james2033 Badges: - Relative Date: 2 years ago Absolute Date: Sun 18 Feb 2024 17:04 Selected Answer: B Upvotes: 1

Need training. Correct answer: No

Comment 3

ID: 633031 User: Eltooth Badges: - Relative Date: 3 years, 1 month ago Absolute Date: Wed 18 Jan 2023 16:02 Selected Answer: B Upvotes: 1

B is correct answer : No.

The model needs to be extended and retrained. (Udemy answer)

Note: Use Smart Labeler to generate suggested tags for images. This lets you label a large number of images more quickly when training a Custom Vision model.

Comment 4

ID: 449361 User: htolajide Badges: - Relative Date: 3 years, 11 months ago Absolute Date: Tue 22 Mar 2022 11:05 Selected Answer: - Upvotes: 4

Answer is correct, no need to create a new model, the existing one should be extended and retrained
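The approach the commenters describe (extend the existing project with the new images, then retrain) might look like this with the Custom Vision training SDK. This is a sketch under assumptions: the `trainer` client and `project_id` are placeholders, the `group_images_by_species` helper is hypothetical, and the SDK calls should be checked against the current `azure-cognitiveservices-vision-customvision` docs.

```python
from pathlib import PurePosixPath


def group_images_by_species(image_paths):
    """Group image paths by their parent directory name, treating the
    directory as the species label (e.g. 'flowers/rose/a.jpg' -> 'rose')."""
    groups = {}
    for p in image_paths:
        label = PurePosixPath(p).parent.name
        groups.setdefault(label, []).append(p)
    return groups


def add_new_species_and_retrain(trainer, project_id, image_paths):
    # SDK models imported here so the pure helper above stays usable
    # without the Azure SDK installed.
    from azure.cognitiveservices.vision.customvision.training.models import (
        ImageFileCreateBatch, ImageFileCreateEntry)

    for species, paths in group_images_by_species(image_paths).items():
        tag = trainer.create_tag(project_id, species)  # new label in the SAME project
        entries = [
            ImageFileCreateEntry(name=p,
                                 contents=open(p, "rb").read(),
                                 tag_ids=[tag.id])
            for p in paths
        ]
        trainer.create_images_from_files(
            project_id, ImageFileCreateBatch(images=entries))

    # Retrain the existing project so the classifier learns the new tags
    # alongside everything it already knows.
    return trainer.train_project(project_id)
```

The key point, matching the suggested answer: the existing project is extended and retrained; creating a brand-new model would discard all previously learned flower species.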

Comment 5

ID: 398463 User: Rdninja Badges: - Relative Date: 4 years, 2 months ago Absolute Date: Tue 04 Jan 2022 18:11 Selected Answer: - Upvotes: 1

You don't need to retrain because you created a brand new model

Comment 5.1

ID: 412441 User: Messatsu Badges: - Relative Date: 4 years, 1 month ago Absolute Date: Sun 23 Jan 2022 13:10 Selected Answer: - Upvotes: 6

No. If "You create a new model, and then upload the new images and labels." your model lacks previous images of other flowers. So the answer is correct.

Comment 5.1.1

ID: 415295 User: YipingRuan Badges: - Relative Date: 4 years, 1 month ago Absolute Date: Thu 27 Jan 2022 11:24 Selected Answer: - Upvotes: 1

If anything, you would create and upload a new model, not just upload the images.

Comment 6

ID: 394990 User: azurelearner666 Badges: - Relative Date: 4 years, 2 months ago Absolute Date: Thu 30 Dec 2021 20:09 Selected Answer: - Upvotes: 3

correct!
response lacks the model retraining...