diff --git a/066-OpenAIFundamentals/.wordlist.txt b/066-OpenAIFundamentals/.wordlist.txt index b3b2cc482f..c080eb5109 100644 --- a/066-OpenAIFundamentals/.wordlist.txt +++ b/066-OpenAIFundamentals/.wordlist.txt @@ -42,3 +42,4 @@ multimodal Agentic MCP Leaderboards +Gally diff --git a/066-OpenAIFundamentals/Coach/Solution-00.md b/066-OpenAIFundamentals/Coach/Solution-00.md index d9e2b9293f..cc3f5493b8 100644 --- a/066-OpenAIFundamentals/Coach/Solution-00.md +++ b/066-OpenAIFundamentals/Coach/Solution-00.md @@ -6,17 +6,17 @@ Challenge-00 is all about helping the student set up the prerequisites for this hack. This includes necessary installations, environment options, and other libraries needed. -They will be creating all needed Azure resources through the Azure AI Foundry. Once they create a hub, they will have an AOAI, AI Search, Azure Document Intelligence, and Azure Storage Account deployed. They will get the credentials for AOAI, AI Search, and Document Intelligence through the AI Foundry. For Azure Storage, they will need to navigate to the Azure Portal. +They will be creating all needed Azure resources through Microsoft Foundry. If they use the deployment script, all of the required Azure resources will be created for them. Otherwise, they will need to create the required resources in the Azure Portal and the Foundry portal. Once they create a project, they will have an AOAI, AI Search, Azure Document Intelligence, and Azure Storage Account deployed. They will get the credentials for AOAI, AI Search, and Document Intelligence through Microsoft Foundry. For Azure Storage, they will need to navigate to the Azure Portal if they are doing it manually. **NOTE:** Target | Endpoint | Base can be used interchangeably. -**NOTE:** For all of the challenges, if a student changes any variables in their .env file, they will need to re-run those cells that load the .env file and set the variables in Python. 
They can check the values of their Jupyter variables by clicking the Jupyter tab in Visual Studio Code. +**NOTE:** For all of the challenges, if a student changes any variables in their .env file, they will need to re-run those cells that load the .env file and set the variables in Python. They can check the values of their Jupyter variables by clicking the Jupyter tab in Visual Studio Code. If Visual Studio Code hangs during cell execution, they will need to restart the kernel, which means they will need to re-run any cells that load the .env file and set the variables in Python. ### Model Deployment Challenge 0 has the students deploy multiple models that will be used for the following: - One model will be used by the Jupyter notebooks for Challenges 1, 3, 4, & 5. The notebooks expect to find the name of this model in the `CHAT_MODEL_NAME` value of the `.env` file. -- A second model will be used in Challenge 2 for model comparison with the Leaderboards in Azure AI Foundry. +- A second model will be used in Challenge 2 for model comparison with the Leaderboards in Microsoft Foundry. - A text embedding model will be used in the Jupyter notebooks for Challenges 3 & 4. The notebooks expect to find the name of this model in the `EMBEDDING_MODEL_NAME` value of the `.env` file. Students can use different/newer models than the ones listed in the student guide when this hack was published. Most models should work fine. Just ensure the values set in the `.env` file match the names of the models deployed. diff --git a/066-OpenAIFundamentals/Coach/Solution-02.md b/066-OpenAIFundamentals/Coach/Solution-02.md index 0da1824c17..1d7d4c169f 100644 --- a/066-OpenAIFundamentals/Coach/Solution-02.md +++ b/066-OpenAIFundamentals/Coach/Solution-02.md @@ -28,7 +28,7 @@ Some possible model choices include: 1. GPT-4o and GPT-4o Mini 2. 
GPT-4o and GPT-5 Mini -For Model Router, students will be deploying an instance of model router in AI Foundry and prompting it with different questions in the chat playground to see how the queries are automatically sent to the different LLMs in depending on their complexity. +For Model Router, students will be deploying an instance of model router in Microsoft Foundry and prompting it with different questions in the chat playground to see how the queries are automatically sent to the different LLMs depending on their complexity. The router may choose the model **`gpt-5-nano-2025-08-07`** consistently for the given prompts. This model is known for its ultra-low latency and fast responses for simple tasks. Encourage students to try longer, multi-step reasoning prompts to trigger a different model. diff --git a/066-OpenAIFundamentals/Coach/Solution-04.md b/066-OpenAIFundamentals/Coach/Solution-04.md index 35787ef4a0..50d6240760 100644 --- a/066-OpenAIFundamentals/Coach/Solution-04.md +++ b/066-OpenAIFundamentals/Coach/Solution-04.md @@ -16,4 +16,4 @@ Known Bugs - **CH4 EXTRACTING FILES: operation returned an invalid status `inefficient storage`** \ **Solution:** If some of the files are extracted, students can continue working with those and ignore the error message. - Rate Limits \ - **Solution:** Have students go into their model deployments in the AI Studio and adjust the TPM allocation for that model to a higher value using the toggle. + **Solution:** Have students go into their model deployments in Microsoft Foundry and adjust the TPM allocation for that model to a higher value using the toggle. 
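The coach notes above refer to notebook cells that load the `.env` file and set variables in Python. Conceptually, such a cell does the following — a stdlib-only sketch of what `.env` loading amounts to (the hack's notebooks may use the `python-dotenv` package instead; the file contents below are made up purely for illustration):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    # Minimal stand-in for python-dotenv's load_dotenv():
    # parse KEY=VALUE lines and export them into the process environment.
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Hypothetical .env contents for demonstration only.
Path(".env").write_text(
    'CHAT_MODEL_NAME="gpt-4o"\nEMBEDDING_MODEL_NAME="text-embedding-ada-002"\n'
)
load_env()
print(os.environ["CHAT_MODEL_NAME"])  # gpt-4o
```

This is why editing `.env` has no effect until the loading cell is re-run: the values only reach Python when the file is re-parsed into the environment.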
diff --git a/066-OpenAIFundamentals/Student/Challenge-00.md b/066-OpenAIFundamentals/Student/Challenge-00.md index eb18e4340a..825345077e 100644 --- a/066-OpenAIFundamentals/Student/Challenge-00.md +++ b/066-OpenAIFundamentals/Student/Challenge-00.md @@ -12,7 +12,7 @@ In this challenge, you will set up the necessary prerequisites and environment t - [Setup Jupyter Notebook Environment](#setup-jupyter-notebook-environment) - [GitHub Codespaces](#setup-github-codespace) - [Local Workstation](#setup-local-workstation) -- [Deploy Azure AI Foundry Resources](#deploy-azure-ai-foundry-resources) +- [Deploy Microsoft Foundry Resources](#deploy-microsoft-foundry-resources) ### Azure Subscription @@ -60,7 +60,7 @@ Your Codespace environment should load in a new browser tab. It will take approx - When the codespace completes loading, you should find an instance of Visual Studio Code running in your browser with the files needed for this hackathon. -You are ready to run the Jupyter Notebook files, hooray! Skip to section: [Setup Azure AI Foundry Project and Hub](#Setup-Azure-AI-Foundry-Project-and-Hub) +You are ready to run the Jupyter Notebook files, hooray! Skip to section: [Setup Microsoft Foundry Project](#setup-microsoft-foundry-project) **NOTE:** If you close your Codespace window, or need to return to it later, you can go to [GitHub Codespaces](https://github.com/codespaces) and you should find your existing Codespaces listed with a link to re-launch it. @@ -134,20 +134,20 @@ Once you have an Azure Machine Learning Studio Workspace set up, you can upload
-### Deploy Azure AI Foundry Resources +### Deploy Microsoft Foundry Resources Now that you have a Jupyter notebook environment setup, you need to: -- Deploy AI models and resources in Azure AI Foundry. +- Deploy AI models and resources in Microsoft Foundry. - Setup Jupyter Notebooks Configuration File -We have provided an automation script that will perform these tasks for you. However, you may wish to complete these tasks manually to become more familiar with Azure AI Foundry. +We have provided an automation script that will perform these tasks for you. However, you may wish to complete these tasks manually to become more familiar with Microsoft Foundry. -- [Automate Azure AI Foundry Deployment](#automate-azure-ai-foundry-deployment) -- [Manual Azure AI Foundry Deployment](#manual-azure-ai-foundry-deployment) +- [Automate Microsoft Foundry Deployment](#automate-microsoft-foundry-deployment) +- [Manual Microsoft Foundry Deployment](#manual-microsoft-foundry-deployment) **NOTE:** If you are limited on time, we recommend using the automation script option. -#### Automate Azure AI Foundry Deployment +#### Automate Microsoft Foundry Deployment We have provided a deployment script and a set of Bicep templates which will deploy and configure the Azure AI resources which you will use for this hackathon. You can find these files in the `/infra` folder of your Codespace or the student `Resources.zip` package. @@ -174,42 +174,38 @@ cd infra chmod +x deploy.sh ./deploy.sh ``` -**NOTE:** By default, the script will create an Azure resource group for you named `rg-ai-foundry-secure`. You may optionally specify a `resourceGroupName` and/or `location` parameters if you need the resources deployed to a specific resource group or region. The default location is "`eastus`" if you don't specify one. +**NOTE:** By default, the script will create an Azure resource group for you named `rg-microsoft-foundry-secure`. 
You may optionally specify the `resourceGroupName` and/or `location` parameters if you need the resources deployed to a specific resource group or region. The default location is "`eastus`" if you don't specify one. ``` ./deploy.sh --resourceGroupName "[resource-group-name]" --location "[location]" ``` -#### Manual Azure AI Foundry Deployment +#### Manual Microsoft Foundry Deployment -**NOTE:** You can skip this section if you chose to automate the deployment. +**NOTE:** You can skip this section if you chose to automate the deployment. It is strongly recommended that you use the automated approach. If you'd like to understand more about what the automated approach is doing, you can use GitHub Copilot to explain what the deployment script and associated Bicep files are doing. -If you want to deploy the Azure AI Foundry resources, expand the section below and follow instructions there. +If you want to deploy the Microsoft Foundry resources, expand the section below and follow the instructions there.
Click to expand/collapse Manual Deployment Instructions -#### Setup Azure AI Foundry Project and Hub +#### Setup Microsoft Foundry Project -Navigate to [AI Foundry](https://ai.azure.com) to create your Azure AI project and the needed resources. A project is used to organize your work and allows you to collaborate with others. A hub provides the hosting environment for your projects. An Azure AI hub can be used across multiple projects. +Navigate to [Microsoft Foundry](https://ai.azure.com) to create your Microsoft Foundry project. + +- Click on the **+ Create New** button. +- Choose Microsoft Foundry resource for the resource type. Click the **Next** button. + - Fill out a name for your project. **Note:** You should not need to specify Advanced Options unless you need or want to change the region because of capacity constraints. Click the **Create** button. +- From the Azure portal (or you can use an Infrastructure as Code approach if you prefer using Bicep/Terraform/ARM/CLI) + - Create an Azure AI Search service + - Specify a service name for your Azure AI Search. You can use the same resource group and location as the Microsoft Foundry resource. **Note:** Make sure you set the Pricing Tier to Standard (Basic/Free is not supported). -- Click on the **+ Create Project** button. -- Give your project a name and click **Create a new hub**. - - Fill out a name for your hub. - - Click the **Next** button - - Click the **Customize** button - - Click **Create new AI Search**. - - Fill out a name for your Azure AI Search - - Click the **Next** button to finish setting up your Azure AI Search - - Click the **Next** button on the screen where it says **Create a hub for your projects** - - On the Review and Finish page, click the **Create** button -- The hub will create an Azure Open AI, Azure Blob, and an AI Service resource for you once it is finished. Resources are different Azure services you will use within the challenges. 
#### Deploy Azure OpenAI Models Now we will deploy the needed large language models from Azure OpenAI. -- Navigate to the [AI Foundry](https://ai.azure.com) +- Navigate to the [Microsoft Foundry](https://ai.azure.com) - On the left navigation bar, under My Assets, click on Models + endpoints. Click the Deploy Model button and select Deploy base model - Deploy the following 3 models in your Azure OpenAI resource. - `gpt-4o` @@ -226,25 +222,13 @@ You will find the `.env.sample` file in the root of the codespace. If you are wo - Rename the file from `.env.sample` to `.env`. - Add all the required Azure resource credentials in the `.env` file. This includes: Azure OpenAI, model deployments, AI Search, Azure Document Intelligence, and Azure Blob - - For **Azure OpenAI and Model Deployments**, you can find these credentials in Azure AI Foundry: - - Navigate to the [AI Foundry](https://ai.azure.com) - - Navigate to your project. In the lower left corner, click on the link to Management Center. It is also under Project details. - - Click on Connected resources under your project - - Click the name of your Azure OpenAI Service to see its details. Copy the Target URL and API Key for `OPENAI_API_BASE` and `OPEN_API_KEY`, respectively into the `.env` file - - From the **`Manage connect resources in this project`** screen, click the Name with the type **`AIServices`**. The AI Services deployment is a multi-service resource that allows you to access multiple Azure AI services like Document Intelligence with a single key and endpoint. Copy the Target URL and the API Key for `AZURE_DOC_INTELLIGENCE_ENDPOINT` and `AZURE_DOC_INTELLIGENCE_KEY`, respectively into the `.env` file - - In the [Azure Portal](portal.azure.com), navigate to the resource group you made when creating your hub within the AI Foundry. 
- - Locate your **AI Search** service that you created earlier - - From the **Overview**, copy the URL for `AZURE_AI_SEARCH_ENDPOINT` in the .env file - - Under **`Settings`** go to Keys, copy the admin key into `AZURE_AI_SEARCH_KEY` in the `.env` file - - Model deployment names should be the same as the ones populated in the `.env.sample` file especially if you have deployed a different model due to quota issues. - - For **Azure Blob**, you can find these credentials in the [Azure Portal](portal.azure.com). - - In the Azure Portal, navigate to the resource group you made when creating your hub within the AI Foundry. - - Click on your **`Storage account`** resource - - Click on **`Security + networking`** and find **`Access keys`**. You should be able to see the **`Storage account name`**, **`key`**, and **`Connection string`**. + - For **Azure OpenAI and Model Deployments**, you can find these credentials in Microsoft Foundry: + - Navigate to [Microsoft Foundry](https://ai.azure.com) + - You will need the values for `OPENAI_API_BASE`, `AZURE_DOC_INTELLIGENCE_ENDPOINT`, `AZURE_AI_SEARCH_ENDPOINT`, `AZURE_AI_PROJECT_ENDPOINT`, and `AZURE_BLOB_STORAGE_ACCOUNT_NAME` to put in your `.env` file. Use your favorite search tool or GitHub Copilot to figure out where to retrieve these values, either in the Foundry portal, the Azure Portal, or using the Azure CLI. **TIP:** Learn more about using `.env` files [here](https://dev.to/edgar_montano/how-to-setup-env-in-python-4a83#:~:text=How%20to%20setup%20a%20.env%20file%201%201.To,file%20using%20the%20following%20format%3A%20...%20More%20items). -**NOTE:** We have also provided a `.gitignore` file that should prevent you from accidentally committing your renamed `.env` file to a Git repo during this hack. +**NOTE:** We have also provided a `.gitignore` file that should prevent you from accidentally committing your own `.env` file to a Git repo during this hack. 
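Before running the notebooks, a quick sanity check that every required value made it into the environment can save debugging time later. This is a hypothetical helper, not part of the hack's materials; the key list simply mirrors the `.env` variables named above:

```python
# Names of the .env values the challenges expect to find.
REQUIRED_KEYS = [
    "OPENAI_API_BASE",
    "AZURE_DOC_INTELLIGENCE_ENDPOINT",
    "AZURE_AI_SEARCH_ENDPOINT",
    "AZURE_AI_PROJECT_ENDPOINT",
    "AZURE_BLOB_STORAGE_ACCOUNT_NAME",
]

def missing_keys(env: dict) -> list:
    # Return the required keys that are absent or empty in the given mapping
    # (pass os.environ after loading the .env file).
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Example with a deliberately incomplete mapping.
partial = {"OPENAI_API_BASE": "https://example.openai.azure.com/"}
print(missing_keys(partial))
```

An empty list means every required credential is present; anything printed is a value still to be filled in before the notebooks will run cleanly.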
**NOTE:** On MacOS, files that start with a `.` are hidden files and are not viewable in Finder when browsing the file system. They will be visible in both VS Code or GitHub Codespaces. @@ -260,10 +244,10 @@ If using GitHub Codespaces: - Verify you have the following files & folders available in the Codespace: - `/data` - `/notebooks` - - `.env` <= Renamed from `.env.sample` + - `.env` <= Copied from `.env.sample` - `.gitignore` - `requirements.txt` -- Verify that you have created the Project and Hub in your AI Foundry. +- Verify that you have created the Project in Microsoft Foundry. - Verify that you have the following resources: Azure OpenAI, deployed the necessary models, AI Search, Document Intelligence, Azure Blob. If working on a local workstation: @@ -276,7 +260,7 @@ If working on a local workstation: - `.env` <= Renamed from `.env.sample` - `.gitignore` - `requirements.txt` -- Verify that you have created the Project and Hub in your AI Foundry. +- Verify that you have created the Project in Microsoft Foundry. - Verify that you have the following resources: Azure OpenAI, deployed the necessary models, AI Search, Document Intelligence, Azure Blob. ## Learning Resources diff --git a/066-OpenAIFundamentals/Student/Challenge-01.md b/066-OpenAIFundamentals/Student/Challenge-01.md index 65097dde87..95f8d479ad 100644 --- a/066-OpenAIFundamentals/Student/Challenge-01.md +++ b/066-OpenAIFundamentals/Student/Challenge-01.md @@ -4,8 +4,8 @@ ## Prerequisites -* Ensure you have the needed resources from the previous challenge in [AI Foundry](https://ai.azure.com/) -* Update the `.env.sample` file (and save as `.env`) with your respective resource credentials if you haven't already +* Ensure you have the needed resources from the previous challenge in [Microsoft Foundry](https://ai.azure.com/) +* Update the `.env.sample` file (and save as `.env`) with your respective resource credentials if you haven't already done so. 
## Introduction diff --git a/066-OpenAIFundamentals/Student/Challenge-02-Old.md b/066-OpenAIFundamentals/Student/Challenge-02-Old.md deleted file mode 100644 index b4bde945a3..0000000000 --- a/066-OpenAIFundamentals/Student/Challenge-02-Old.md +++ /dev/null @@ -1,92 +0,0 @@ -# Challenge 02 - OpenAI Models & Capabilities - -[< Previous Challenge](./Challenge-01.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-03.md) - -## Introduction - -In this challenge, you will learn about the different capabilities of OpenAI models and learn how to choose the best model for your use case. - -There are a lot of different models available in the Azure AI Model Catalog. These include models from OpenAI and other open source large language models from Meta, Hugging Face, and more. You are going to explore various LLMs and compare gpt3.5 to gpt4 model in this challenge. - -In a world where the availability and development of models are always changing, the models we compare may change over time. But we encourage you to understand the general concepts and material in this Challenge because the comparison techniques utilized can be applicable to scenarios where you are comparing Large and/or Small Language Models. For more information on legacy models and additional models, reference the [documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/legacy-models) and [Azure model catalog](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/model-catalog-overview) for more details. - -## Description -Questions you should be able to answer by the end of this challenge: -- What are the capacities of each Azure OpenAI model? -- How to select the right model for your application? -- What model would you select to perform complex problem solving? -- What model would you select to generate new names? - -You will work in the Azure AI Foundry for this challenge. 
We recommend keeping the student guide and the Azure AI Foundry in two windows side by side as you work. This will also help to validate you have met the [success criteria](#success-criteria) below for this challenge. - -This challenge is divided into the following sections: - -- [2.1 Model Discovery](#21-model-discovery) -- [2.2 Model Benchmarking](#22-model-benchmarking) -- [2.3 Model Comparison](#23-model-comparison) -- [2.4 Prompt Flow](#24-prompt-flow) - - 2.4.1 Complex Problem Solving - - 2.4.2 Creative and Technical Writing - - 2.4.3 Long Form Content Understanding - -### 2.1 Model Discovery -Scenario: You are part of a research team working on getting information from biotech news articles. Your goal is to explore the Model Catalog and identify some suitable models for accurate question answering. There is no right or wrong answer here. - -#### Student Task 2.1 -- Go into the [Azure AI Foundry](https://ai.azure.com). -- Navigate to the Model Catalog and explore different models using the correct filters. -- Identify which models can potentially improve the accuracy of the task at hand. - -**HINT:** Take a look at the model cards for each model by clicking into them. Evaluate the models based on their capabilities, limitations, and fit for the use case. Which models seem to be good options for question answering? - -### 2.2 Model Benchmarking -#### Student Task 2.2 -- Use the benchmarking tool and **Compare models** in Foundry to compare the performance of all the selected models you chose from the previous challenge, on industry standard datasets now. -- Leverage the metrics such as accuracy, coherence, and more. -- Recommend the best-performing model for biotech news Q&A. - -### 2.3 Model Comparison -#### Student Task 2.3 -- Navigate to [Github's Model Marketplace](https://github.com/marketplace/models) -- Choose two models to compare. What are your observations? 
- -### 2.4 Prompt Flow -Scenario: You are a product manager at a multinational tech company, and your team is developing an advanced AI-powered virtual assistant to provide real-time customer support. The company is deciding between GPT-3.5 Turbo and GPT-4 to power the virtual assistant. Your task is to evaluate both models to determine which one best meets the company's needs for handling diverse customer inquiries efficiently and effectively. - -Navigate to the AI Foundry and click on your project. You should be able to see **Prompt flow** under Tools in the navigation bar. Create a new **standard flow** to solve the tasks below and compare the responses from different models. For each task, you will see the provided prompts that you can test against the deployed models. - -**NOTE:** If you get this **User Error: This request is not authorized to perform this operation using this permission. Please grant workspace/registry read access to the source storage account.** when you create a new **standard** flow using the default name, then please append some random characters to the name or create a unique name for your flow. - -**HINT:** Click on the **Prompt Flow** and **Create a Flow** under the [Learning Resources](#learning-resources) for a good overview on Prompt Flow. - -#### Student Task 2.4.1: Complex Problem Solving - Compare the models' abilities to navigate complex customer complaints and provide satisfactory solutions. - - Prompt: "A customer is unhappy with their recent purchase due to a missing feature. Outline a step-by-step resolution process that addresses their concern and offers a satisfactory solution." - - Prompt: "Develop a multi-step troubleshooting guide for customers experiencing issues with their smart home devices, integrating potential scenarios and solutions." - -#### Student Task 2.4.2: Creative and Technical Writing - Assess the models' capabilities in technical writing, such as creating detailed product manuals or help articles. 
- - Prompt: "Write a product description for a new smartphone that highlights its innovative features in a creative and engaging manner." - - Prompt: "Create a comprehensive FAQ section for a complex software application, ensuring clarity and technical accuracy." - -#### Student Task 2.4.3: Long Form Content Understanding - Provide both models with extensive customer feedback or product reviews and ask them to summarize the key points. - - We have provided a `ch2_1.5_product_review.txt` file that contains a product review for you to use with the given prompt below. You will find the `ch2_1.5_product_review.txt` file in the `/data` folder of the codespace. If you are working on your local workstation, you will find the `ch2_1.5_product_review.txt` file in the `/data` folder of the `Resources.zip` file. Please copy & paste the contents of this file within your prompt. - - Prompt: "Analyze a detailed product review and extract actionable insights that can inform future product development." 
-## Success Criteria - -To complete this challenge successfully, you should be able to: -- Show an understanding of each model and its suitable use cases -- Show an understanding of differences between models -- Select the most suitable model to apply under different scenarios - -## Learning Resources - -- [Overview of Azure OpenAI Models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/models) -- [Prompt Flow](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow) -- [Create a Flow](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/flow-develop) -- [Tune Variants](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/flow-tune-prompts-using-variants) -- [Azure OpenAI Pricing Page](https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/) -- [Request for Quota Increase](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4xPXO648sJKt4GoXAed-0pURVJWRU4yRTMxRkszU0NXRFFTTEhaT1g1NyQlQCN0PWcu) -- [Customize Models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-studio) diff --git a/066-OpenAIFundamentals/Student/Challenge-02.md b/066-OpenAIFundamentals/Student/Challenge-02.md index 005062af52..3d2b55e943 100644 --- a/066-OpenAIFundamentals/Student/Challenge-02.md +++ b/066-OpenAIFundamentals/Student/Challenge-02.md @@ -19,7 +19,7 @@ Questions you should be able to answer by the end of this challenge: - What model would you select to perform complex problem solving? - What model would you select to generate new names? -You will work in the Azure AI Foundry for this challenge. We recommend keeping the student guide and the Azure AI Foundry in two windows side by side as you work. This will also help to validate you have met the [success criteria](#success-criteria) below for this challenge. +You will work in the Microsoft Foundry for this challenge. 
We recommend keeping the student guide and the Microsoft Foundry in two windows side by side as you work. This will also help to validate you have met the [success criteria](#success-criteria) below for this challenge. This challenge is divided into the following sections: @@ -36,7 +36,7 @@ This challenge is divided into the following sections: Scenario: You are building a chatbot for a retail company that needs fast responses and safe outputs. Your goal is to explore the Model Catalog and identify models for this use case. There is no right or wrong answer here. #### Student Task 2.1 -- Go into the [Azure AI Foundry](https://ai.azure.com). +- Go into [Microsoft Foundry](https://ai.azure.com). - Navigate to the Model Catalog and explore different models using the correct filters. - Identify which model can potentially help with the task at hand. - Share your findings with a peer and compare your choices. Did you pick the same models? Why or why not? @@ -75,7 +75,7 @@ Scenario: You are a product manager at a multinational tech company, and your te ### 2.4 Model Router #### Student Task 2.4 -- Navigate to AI Foundry and deploy an instance of model router in the same project as your other models +- Navigate to Microsoft Foundry and deploy an instance of model router in the same project as your other models - In Chat Playground use the model router deployment and prompt it with a variety of questions ranging simple to difficult. You can use the sample prompts below or come up with your own! Note how different models are used for each query (you can see this switch in the metadata on top of the prompt). - After trying the below prompts navigate to a browser window and open Copilot. Ask Copilot the pricing for the three different models each query used. Note the price difference for each model. The smart routing is optimizing cost by using light weight models (which are cheaper) for the easier prompts! 
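Conceptually, the model router's cost optimization can be illustrated with a toy sketch. This is an illustration only — the real router uses a learned classifier rather than a word count, and the threshold and model names below are made up:

```python
def route(prompt: str) -> str:
    # Toy stand-in for model router: short/simple prompts go to a small,
    # cheap model; longer multi-step prompts go to a larger reasoning model.
    # The 20-word threshold and both model names are invented for illustration.
    if len(prompt.split()) <= 20:
        return "small-fast-model"
    return "large-reasoning-model"

print(route("What is the capital of France?"))
print(route("Walk me through a detailed multi-step plan to " + "analyze " * 20 + "this dataset."))
```

The same idea explains what students observe in the playground: cheap, low-latency models handle the easy prompts, and only the harder prompts pay for a larger model.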
@@ -105,7 +105,7 @@ To complete this challenge successfully, you should be able to: ## Learning Resources - [Overview of Azure OpenAI Models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/models) -- [Use Model Router for Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/model-router) +- [Use Model Router for Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/model-router) - [Azure OpenAI Pricing Page](https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/) - [Request for Quota Increase](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4xPXO648sJKt4GoXAed-0pURVJWRU4yRTMxRkszU0NXRFFTTEhaT1g1NyQlQCN0PWcu) - [Customize Models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-studio) diff --git a/066-OpenAIFundamentals/Student/Challenge-04.md b/066-OpenAIFundamentals/Student/Challenge-04.md index aa5a1e9c19..0550ae655a 100644 --- a/066-OpenAIFundamentals/Student/Challenge-04.md +++ b/066-OpenAIFundamentals/Student/Challenge-04.md @@ -4,8 +4,8 @@ ## Pre-requisites -- Azure Form Recognizer resource for extracting text from raw unstructured data -- Azure Cognitive Search resource for indexing and retrieving relevant information +- Azure Document Intelligence resource for extracting text from raw unstructured data +- Azure AI Search resource for indexing and retrieving relevant information - Azure OpenAI service for Generative AI Models and Embedding Models - Add required credentials of above resources in `.env` file - Install the required libraries in the `requirements.txt` file via ```pip install -r requirements.txt ``` if you have not already @@ -14,7 +14,7 @@ Knowledge bases are widely used in enterprises and can contain an extensive number of documents across various categories. 
Retrieving relevant content based on user queries is a challenging task. Traditionally, methods like Page Rank have been employed to accurately retrieve information at the document level. However, users still need to manually search within the document to find the specific and relevant information they need. The recent advancements in Foundation Models, such as the one developed by OpenAI, offer a solution through the use of "Retrieval Augmented Generation" techniques and encoding information like "Embeddings." These methods aid in finding the relevant information and then to answer or summarize the content to present to the user in a concise and succinct manner. -Retrieval augmented generation (RAG) is an innovative approach that combines the power of retrieval-based Knowledge bases, such as Azure Cognitive Search, and generative Large Language Models (LLMs), such as Azure OpenAI ChatGPT, to enhance the quality and relevance of generated outputs. This technique involves integrating a retrieval component into a generative model, enabling the retrieval of contextual and domain-specific information from the knowledge base. By incorporating this contextual knowledge alongside the original input, the model can generate desired outputs, such as summaries, information extraction, or question answering. In essence, the utilization of RAG with LLMs allows you to generate domain-specific text outputs by incorporating specific external data as part of the context provided to the LLMs. +Retrieval augmented generation (RAG) is an innovative approach that combines the power of retrieval-based Knowledge bases, such as Azure AI Search, and generative Large Language Models (LLMs), such as Azure OpenAI ChatGPT, to enhance the quality and relevance of generated outputs. This technique involves integrating a retrieval component into a generative model, enabling the retrieval of contextual and domain-specific information from the knowledge base. 
By incorporating this contextual knowledge alongside the original input, the model can generate desired outputs, such as summaries, information extraction, or question answering. In essence, the utilization of RAG with LLMs allows you to generate domain-specific text outputs by incorporating specific external data as part of the context provided to the LLMs. RAG aims to overcome limitations found in purely generative models, including issues of factual accuracy, relevance, and coherence, often seen in the form of "hallucinations". By integrating retrieval into the generative process, RAG seeks to mitigate these challenges. The incorporation of retrieved information serves to "ground" the large language models (LLMs), ensuring that the generated content better aligns with the intended context, enhances factual correctness, and produces more coherent and meaningful outputs. @@ -43,28 +43,27 @@ If you are working locally or in the Cloud, you can find them in the `/notebooks To run a Jupyter notebook, navigate to it in your Codespace or open it in VS Code on your local workstation. You will find further instructions for the challenge, as well as in-line code blocks that you will interact with to complete the tasks for the challenge. Return here to the student guide after completing all tasks in the Jupyter notebook to validate you have met the [success criteria](#success-criteria) below for this challenge. -### Exploring RAG in Azure AI Foundry -Try RAG in the Azure AI Foundry portal with unstructured data. -1. This is the prerequisite to the following steps: Navigate to the [Azure Portal](https://portal.azure.com/#home) and find your resource group. Then navigate to the right storage account. On the left navigation, click `Networking`. Under `Firewalls and virtual networks`, select `Enabled from all networks`. -2. Navigate to [Azure AI Foundry](https://ai.azure.com/) and `Playgrounds` in the left navigation. Find the `Chat Playground`. -3. 
Feel free to keep the default model instructions or modify them. -4. Click on `Add your data` and then `+ Add a new data source`. Let's choose `Upload files` from the drop down of different data sources for this exercise. Grab the data provided in your Codespace under the `/data` folder in `ch2_1.5_product_review.txt`. -5. Click next and select your search service and vector index.On the next page, click `Add vector search to this search resource` and choose your AOAI Service connection. Finally, select `Create vector index`. -6. Once complete, you should be able to chat with the data we added earlier. -7. Ask **What are some of the features and functionalities of the Gally Smartwatch?** +### Exploring RAG in Microsoft Foundry +Try RAG in the Microsoft Foundry portal with unstructured data. +1. Navigate to [Microsoft Foundry](https://ai.azure.com/) and `Playgrounds` in the left navigation. Find the `Chat Playground`. +2. Feel free to keep the default model instructions or modify them. +3. Click on `Add your data` and then `+ Add a new data source`. Let's choose `Upload files` from the drop down of different data sources for this exercise. Grab the data provided in your Codespace under the `/data` folder in `ch2_1.5_product_review.txt`. +4. Click next and select your search service and vector index. On the next page, click `Add vector search to this search resource` and choose your AOAI Service connection. Finally, select `Create vector index`. +5. Once complete, you should be able to chat with the data we added earlier. +6.
Ask **What are some of the features and functionalities of the Gally Smartwatch?** ## Success Criteria To complete this challenge successfully, you should be able to: - Verify that you have extracted text from raw unstructured data using the Azure Document Intelligence API into a more structured format such as JSON -- Verify that you have created an index using Azure Cognitive Search based on the type of data you are dealing with and load data into the index. +- Verify that you have created an index using Azure AI Search based on the type of data you are dealing with and loaded data into the index. - Demonstrate the use of Iterative Prompt Development to write effective prompts for your AI tasks ## Learning Resources - [Use OpenAI GPT with your Enterprise Data](https://techcommunity.microsoft.com/t5/startups-at-microsoft/use-openai-gpt-with-your-enterprise-data/ba-p/3817141) -- [ChatGPT + Enterprise data with Azure OpenAI and Cognitive Search](https://github.com/Azure-Samples/azure-search-openai-demo) +- [ChatGPT + Enterprise data with Azure OpenAI and AI Search](https://github.com/Azure-Samples/azure-search-openai-demo) - [Build Industry-Specific LLMs Using Retrieval Augmented Generation](https://towardsdatascience.com/build-industry-specific-llms-using-retrieval-augmented-generation-af9e98bb6f68) ## Advanced Challenges (Optional) diff --git a/066-OpenAIFundamentals/Student/Challenge-05.md b/066-OpenAIFundamentals/Student/Challenge-05.md index a6d0ed17ba..0f87d6c39f 100644 --- a/066-OpenAIFundamentals/Student/Challenge-05.md +++ b/066-OpenAIFundamentals/Student/Challenge-05.md @@ -1,4 +1,4 @@ -# Challenge 05 - Trustworthy AI +# Challenge 05 - Responsible AI [< Previous Challenge](./Challenge-04.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-06.md) @@ -27,33 +27,9 @@ This challenge is divided into the following sections: - [5.6 Protected Material Detection](#56-protected-material-detection-preview) - [5.7 Red Teaming](#57-red-teaming) -For each
section of this Challenge, you will work in [Azure AI Foundry](https://ai.azure.com). We recommend keeping the student guide and the Azure AI Foundry in two windows side by side as you work. This will also help to validate you have met the success criteria below for this challenge. +For each section of this Challenge, you will work in [Microsoft Foundry](https://ai.azure.com). We recommend keeping the student guide and Microsoft Foundry in two windows side by side as you work. This will also help to validate you have met the success criteria below for this challenge. -**NOTE:** Previously, each of the Content Safety services were hosted in their own portals. As of July 2024, they have been integrated into Azure AI Foundry. While searching for documentation of these services, you may find references to their original stand-alone portals. You should access these services via Azure AI Foundry for this hack. - -### Access Content Safety Service - TEMPORARY STEP - -Azure AI Services are constantly changing. As of July 2024, the Azure AI Foundry does not automatically grant your user access to the Content Safety service. You will need to perform this task manually. We are adding these detailed steps here to complete this challenge today. We anticipate these steps will not be required in the near future when Azure AI Foundry should handle this automatically. 
- -Follow these steps to grant your user account access to the Content Safety service: - -- In the [Azure Portal](https://portal.azure.com), navigate to the resource group where your AI resources are deployed -- Navigate to the **Azure AI services** resource -- Click **Access control (IAM)** from the left menu -- Click the **+ ADD** button, then select **Add role assignment** -- On the **Add role assignment** screen, type "Cognitive Services User" in the search box -- In the list of roles, click/highlight the **Cognitive Services User** row -- Click the **NEXT** button at the bottom of the screen -- Click **+ Select Members** -- In the "Select Members" pane that appears, select YOUR user account from the list of users. (This should be **`ODL_User_XXXXXX@azureholXXXX.onmicrosoft.com`**) -- Click the **SELECT** button -- Click the **Review & Assign** button to complete the role assignment - -After the role assignment completes in the Azure Portal, you will need to wait 1-3 minutes and then follow one additional step: - -- Log out of [Azure AI Foundry](https://ai.azure.com), and then log back in. This will ensure your login token is refreshed with the new permissions for Content Safety. - -You should now be prepared to complete the rest of this challenge! +**NOTE:** Previously, each of the Content Safety services were hosted in their own portals. As of July 2024, they have been integrated into Microsoft Foundry. While searching for documentation of these services, you may find references to their original stand-alone portals. You should access these services via Microsoft Foundry for this hack. ### 5.1 Harmful Content Detection @@ -63,7 +39,7 @@ Your Azure AI Services resource includes Content Safety. You may refer to this [ 1. [Understand harm categories](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/harm-categories?tabs=warning) defined by Microsoft. -2. 
In the [AI Foundry](https://ai.azure.com/), navigate to your Project and the **AI Services** pane. From here, you should find the option to try out Content Safety capabilities. +2. In the [Microsoft Foundry](https://ai.azure.com/), navigate to your Project and the **AI Services** pane. From here, you should find the option to try out Content Safety capabilities. 3. Try out the following features in Content Safety using provided sample text and data, or come up with your own examples. Analyze the moderation results. Try viewing the code! @@ -90,7 +66,7 @@ Let's configure a content filtering system both for user input (prompts) and LLM #### Student Task 5.2: Create a Custom Content Filter -1. Configure a content filter following these [instructions for the Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/content-filtering#create-a-content-filter). Select the AI project in your AI Hub that contains any model deployments you made in the previous Challenges. Design a content filter that could hypothetically apply to an internal or external tool in your workplace. Or get creative and come up with a scenario that could use a filter, such as an online school forum. +1. Configure a content filter following these [instructions for Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/content-filtering#create-a-content-filter). Select the AI project in your AI Hub that contains any model deployments you made in the previous Challenges. Design a content filter that could hypothetically apply to an internal or external tool in your workplace. Or get creative and come up with a scenario that could use a filter, such as an online school forum. 2. In the "Input Filter" step, configure the four content categories. Keep "Prompt shields for jailbreak attacks" and "Prompt shields for indirect attacks" toggled to "Off" (default) for now. 
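Conceptually, the custom content filter you configure in this task is a per-category severity threshold: each analyzed category comes back with a severity score, and content is blocked when a category's severity meets or exceeds the threshold you chose. The sketch below is an illustrative mental model only, not the Azure implementation; the category names and the numeric severity scale are simplified assumptions.

```python
# Illustrative model of a severity-threshold content filter (NOT the Azure
# Content Safety implementation; names and scale are simplified assumptions).
SEVERITY = {"safe": 0, "low": 2, "medium": 4, "high": 6}

def filter_decision(category_severities: dict[str, int],
                    thresholds: dict[str, int]) -> dict[str, bool]:
    """Per-category block decisions: block when severity meets/exceeds the threshold."""
    return {cat: sev >= thresholds.get(cat, SEVERITY["medium"])
            for cat, sev in category_severities.items()}

# Example: a strict filter for a hypothetical school forum blocks even
# low-severity hate content, while violence uses the default medium threshold.
thresholds = {"hate": SEVERITY["low"], "violence": SEVERITY["medium"]}
decisions = filter_decision({"hate": 2, "violence": 2, "sexual": 0}, thresholds)
print(decisions)  # {'hate': True, 'violence': False, 'sexual': False}
```

Tightening a threshold (blocking at lower severity) trades more false positives for stronger protection, which is the same trade-off you weigh when choosing Low/Medium/High settings in the portal.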
@@ -121,7 +97,7 @@ Learn more about PII in the [documentation](https://learn.microsoft.com/en-us/az #### Student Task 5.3: PII Detection, Redaction, and Extraction -1. In the [AI Foundry](https://ai.azure.com/), navigate to your Project and the "AI Services" in the navigation pane. From here, you should find the option to try out "Language + Translator" capabilities. +1. In the [Microsoft Foundry](https://ai.azure.com/), navigate to your Project and the "AI Services" in the navigation pane. From here, you should find the option to try out "Language + Translator" capabilities. * How do you enable redacting PII? How does the output hide those entities? @@ -144,7 +120,7 @@ Any application system that relies on data to provide answers should be mindful Learn what Ungroundedness and Groundedness are, as well as how [Groundedness Detection](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/groundedness) on Azure works, via the [Microsoft Technical Blog](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/detect-and-mitigate-ungrounded-model-outputs/ba-p/4099261). #### Student Task 5.4.1 -In the [AI Foundry](https://ai.azure.com/), navigate to your Project and the **AI Services** pane. From here, you should find the option to try out Content Safety capabilities. +In the [Microsoft Foundry](https://ai.azure.com/), navigate to your Project and the **AI Services** pane. From here, you should find the option to try out Content Safety capabilities. Try out the following features in Content Safety using provided sample text and data, or come up with your own examples. Analyze the results. Try viewing the code! @@ -157,7 +133,7 @@ Protecting your LLM application from bad actors is equally important as moderati Attacks can occur through user prompts as well as documents that contain hidden embedded instructions to gain unauthorized control over the LLM session. 
Read more about [subtypes of user prompt attacks](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/jailbreak-detection). These are considered "input attacks." #### Student Task 5.5.1 -In the [AI Foundry](https://ai.azure.com/), navigate to your Project and the **AI Services** pane. From here, you should find the option to try out Content Safety capabilities. +In the [Microsoft Foundry](https://ai.azure.com/), navigate to your Project and the **AI Services** pane. From here, you should find the option to try out Content Safety capabilities. Try out the following features in Content Safety using provided sample text and data, or come up with your own examples. Analyze the results. What is the difference between a direct and indirect attack? @@ -165,13 +141,13 @@ Try out the following features in Content Safety using provided sample text and #### Student Task 5.5.2: Update and test your custom content filter using Prompt Shields -1. Revisit the custom content filter you created earlier in the Azure AI Foundry. +1. Revisit the custom content filter you created earlier in Microsoft Foundry. 2. In the **Input filter** tab, toggle the setting for **Prompt shields for jailbreak attacks** and **Prompt shields for indirect attacks** to either **Annotate only** or **Annotate and block**. Keep in mind, for future implementations, that **Annotate and block** can reduce token usage compared to **Annotate only**, which will still return the completed output. 3. Apply the updated filter to one of your deployed models. -4. In the "Chat" tab of the Playground in the Azure AI Foundry, experiment with your updated content filter. +4. In the "Chat" tab of the Playground in Microsoft Foundry, experiment with your updated content filter. Here are some example jailbreak attacks to prompt your protected model. 
Copy and paste the following prompts to evaluate the LLM's filtered responses: @@ -188,17 +164,17 @@ Here are some example jailbreak attacks to prompt your protected model. Copy and ### 5.6 Protected Material Detection Preview #### Student Task 5.6.1 -In the [AI Foundry](https://ai.azure.com/), navigate to your Project and the **AI Services** pane. From here, you should find the option to try out Content Safety capabilities. +In [Microsoft Foundry](https://ai.azure.com/), navigate to your Project and the **AI Services** pane. From here, you should find the option to try out Content Safety capabilities. Try out the following features in Content Safety using provided sample text and data, or come up with your own examples. Analyze the results. * "Protected material detection for text" or "Protected material detection for code" #### Student Task #5.6.2: Update and test your custom content filter using Protected Material Detection -1. Revisit the custom content filter you created earlier in the Azure AI Foundry. +1. Revisit the custom content filter you created earlier in Microsoft Foundry. 2. In the "Output filter" tab, toggle the setting for "Protected material for text" to either "Annotate only" or "Annotate and block." Keep in mind, for future implementations, that "Annotate and block" can reduce token usage compared to "Annotate only," which will still return the completed output. 3. Apply the updated filter to one of your deployed models. -4. In the "Chat" tab of the Playground in the Azure AI Foundry, experiment with your updated content filter. +4. In the "Chat" tab of the Playground in Microsoft Foundry, experiment with your updated content filter. Here is a sample prompt for testing purposes: `to everyone, the best things in life are free. the stars belong to everyone, they gleam there for you and me. the flowers in spring, the robins that sing, the sunbeams that shine, they\'re yours, they\'re mine. 
and love can come to everyone, the best things in life are` @@ -227,9 +203,9 @@ To complete this challenge successfully, you should be able to: - Identify tools available to identify and mitigate harms in LLMs ## Conclusion -In this Challenge, you explored principles and practical tools to implement Responsible AI with an LLM system through the Azure AI Foundry. Understanding how to apply Responsible AI principles is essential for maintaining user trust and integrity within AI-driven platforms. +In this Challenge, you explored principles and practical tools to implement Responsible AI with an LLM system through Microsoft Foundry. Understanding how to apply Responsible AI principles is essential for maintaining user trust and integrity within AI-driven platforms. -Throughout this Challenge, you have explored the importance of detecting and managing harmful content, as well as the necessity of personally identifiable information (PII) detection and redaction in generative AI applications. By engaging with Azure AI tools in the AI Foundry, you have gained practical experience in moderating content, filtering out undesirable material, and protecting sensitive data. +Throughout this Challenge, you have explored the importance of detecting and managing harmful content, as well as the necessity of personally identifiable information (PII) detection and redaction in generative AI applications. By engaging with Azure AI tools in Microsoft Foundry, you have gained practical experience in moderating content, filtering out undesirable material, and protecting sensitive data. As you move forward, remember the significance of grounding responses in accurate data to prevent the propagation of misinformation and safeguard against input attacks. There are many ways to mitigate harms, and securing your application responsibly is an ongoing endeavor. 
We encourage you to continuously strive to enhance the safety and reliability of your AI systems, keeping in mind the evolving landscape of digital content safety. @@ -249,5 +225,5 @@ As you move forward, remember the significance of grounding responses in accurat - [New Updates in AI Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/whats-new) - [eBook](https://aka.ms/contentsafetyebook) - [Infuse Responsible AI tools and practices in your LLMOps Microsoft Azure Blog](https://azure.microsoft.com/en-us/blog/infuse-responsible-ai-tools-and-practices-in-your-llmops/) -- [Introducing AI Red Teaming Agent: Accelerate your AI safety and security journey with Azure AI Foundry](https://devblogs.microsoft.com/foundry/ai-red-teaming-agent-preview/) +- [Introducing AI Red Teaming Agent: Accelerate your AI safety and security journey with Microsoft Foundry](https://devblogs.microsoft.com/foundry/ai-red-teaming-agent-preview/) diff --git a/066-OpenAIFundamentals/Student/Challenge-06.md b/066-OpenAIFundamentals/Student/Challenge-06.md index 97aef595e7..9303e29f40 100644 --- a/066-OpenAIFundamentals/Student/Challenge-06.md +++ b/066-OpenAIFundamentals/Student/Challenge-06.md @@ -2,48 +2,59 @@ [< Previous Challenge](./Challenge-05.md) - **[Home](../README.md)** +## Pre-requisites + +- Python 3.10+ installed +- Access to a Microsoft Foundry project with a deployed model (e.g., `gpt-4o`) + ## Introduction -Integrating agents into an application after implementing Retrieval-Augmented Generation (RAG) can significantly enhance user experience by providing personalized interactions and automating repetitive tasks. Additionally, agents can improve decision-making, ensure scalability, and offer real-time responses, making them ideal for complex task management and continuous improvement. In this challenge, you will learn how to use the Azure AI Agent service to build, deploy, and scale enterprise-grade AI agents. 
+Integrating agents into an application after implementing Retrieval-Augmented Generation (RAG) can significantly enhance user experience by providing personalized interactions and automating repetitive tasks. Additionally, agents can improve decision-making, ensure scalability, and offer real-time responses, making them ideal for complex task management and continuous improvement. + +In this challenge, you will build a **Research Assistant Agent** using the Microsoft Agent Framework. This agent will leverage **Model Context Protocol (MCP)** to connect to live data sources like Microsoft Learn documentation, enabling it to provide accurate, up-to-date answers to technical questions. ## Description -In this challenge, you will create a basic agent. +In this challenge, you will create a code-based agent that can query real-time documentation using MCP tools. -### Setup +You will run the following Jupyter notebook to complete the tasks for this challenge: +- `CH-06-AgenticAI.ipynb` -1. Log into your [AI Foundry portal](ai.azure.com) -2. In your project's left-hand pane, navigate to `My assets -> Models and endpoints`. -3. On the Model deployments tab, click the `+ Deploy model` button and select `Deploy base model` from the drop down. -4. Search for the gpt-4o-mini model, select it, and confirm the deployment. +The file can be found in your Codespace under the `/notebooks` folder. +If you are working locally or in the Cloud, you can find it in the `/notebooks` folder of the `Resources.zip` file. -### Creating the Agent -1. In the left-hand pane, under `Build & Customize`, select `Agents` -2. Select your Azure OpenAI resource and hit `Let's go`. -3. Select your model deployment and hit `Next`. -4. You should see an agent under the `Agents` tab at the top. If you select it, you can give it a new name. Enter "`FlightAgent`". -5. You can add instructions as well. Within your codespace, you should see a data folder. That contains the text in the file `FlightAgent.md`.
Copy the text from here and add it in instructions. -6. Optional: You can also add a Knowledge Base and Actions to enhance the agent's capabilities. -7. At the top of the agent's `Setup` pane, select `Try in playground`. -8. Here you can interact with your agent in the `Playground` by entering queries in the chat window. For instance, ask the agent to `search for queries from Seattle to New York on the 28th`. Note: The agent may not provide completely accurate responses as it doesn't use real-time data in this example. The purpose is to test its ability to understand and respond to queries. -### Clean-Up -1. Remember to delete your resource group in the Azure portal once you have completed all of the challenges. +The notebook covers the following areas: +- Setting up your environment and installing the Microsoft Agent Framework +- Creating the Research Assistant agent with MCP integration +- Testing single queries and multi-turn conversations +- Exploring how to extend the agent with custom tools +Test your agent with questions like: +- "What is Azure Kubernetes Service and when should I use it?" +- "How do I set up managed identity for Azure Functions?" +- "What are the best practices for Azure OpenAI prompt engineering?" ## Success Criteria To complete this challenge successfully, you should be able to: -- Articulate what an agent is and why it can be used -- Identify tools available to extend an agents capabilities +- Demonstrate your understanding of what an agent is and how tools extend its capabilities +- Verify that your agent is created using the Microsoft Agent Framework in Python +- Verify that MCP tools are integrated to connect your agent to live data sources +- Demonstrate a multi-turn conversation with your Research Assistant + +## Tips + +As you continue developing AI applications, consider how agents can be composed together—what coordination patterns would you use for multi-agent systems handling complex research or analysis tasks? 
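The agent/tool relationship the notebook builds on can be pictured as a simple loop: the model either asks for a registered tool or produces a final answer, and tool results are fed back into the conversation. The sketch below is framework-agnostic plain Python with a stubbed model; it is not the Microsoft Agent Framework API or the MCP wire protocol, and names like `fake_model`, `run_agent`, and `search_docs` are invented for illustration.

```python
# Framework-agnostic sketch of an agent loop. This is NOT the Microsoft Agent
# Framework API or the MCP protocol; all names here are illustrative stand-ins.
from typing import Callable

def fake_model(messages: list[str], tool_names: list[str]) -> str:
    """Stand-in for an LLM: request the docs tool once, then answer."""
    if not any(m.startswith("tool:") for m in messages):
        return "CALL search_docs Azure Kubernetes Service"
    return "ANSWER " + messages[-1].removeprefix("tool: ")

def run_agent(question: str, tools: dict[str, Callable[[str], str]]) -> str:
    """Loop: let the model pick a tool, execute it, feed the result back."""
    messages = [f"user: {question}"]
    while True:
        reply = fake_model(messages, list(tools))
        if reply.startswith("CALL "):
            _, name, arg = reply.split(" ", 2)
            messages.append("tool: " + tools[name](arg))  # tool result re-enters the context
        else:
            return reply.removeprefix("ANSWER ")

# A stand-in documentation tool, playing the role the Microsoft Learn MCP
# server plays in the challenge notebook.
docs_tool = lambda q: f"{q} is a managed Kubernetes offering on Azure."
print(run_agent("What is AKS?", {"search_docs": docs_tool}))
# prints: Azure Kubernetes Service is a managed Kubernetes offering on Azure.
```

In the real notebook the MCP server replaces `docs_tool` and the deployed model replaces `fake_model`; the loop shape, however, is the same, which is why adding tools is enough to extend what an agent can do.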
-## Conclusion -In this Challenge, you explored creating an AI Agent through the Azure AI Foundry portal. This developer friendly experience integrates with several tools, knowledge connections, and systems. As you start or continue to develop your AI applications, think about the coordination needed between different agents and their roles. What would be some important considerations with multi-agent systems when handling complex tasks? +**Clean-Up:** Remember to delete your resource group in the Azure portal once you have completed all of the challenges. ## Learning Resources -- [Overview of Azure AI Agents](https://learn.microsoft.com/en-us/azure/ai-services/agents/?view=azure-python-preview) -- These steps are listed here along with many other prompts: [AI Agents in AI Foundry](https://techcommunity.microsoft.com/blog/educatordeveloperblog/step-by-step-tutorial-building-an-ai-agent-using-azure-ai-foundry/4386122) . +- [Microsoft Agent Framework on GitHub](https://github.com/microsoft/agent-framework) +- [Overview of Microsoft Agents](https://learn.microsoft.com/en-us/azure/ai-services/agents/) +- [Model Context Protocol (MCP) Overview](https://modelcontextprotocol.io/) +- [Microsoft Learn MCP Integration](https://learn.microsoft.com/en-us/mcp) diff --git a/066-OpenAIFundamentals/Student/Resources/infra/deploy.sh b/066-OpenAIFundamentals/Student/Resources/infra/deploy.sh index 6e851de2c5..da90473dff 100755 --- a/066-OpenAIFundamentals/Student/Resources/infra/deploy.sh +++ b/066-OpenAIFundamentals/Student/Resources/infra/deploy.sh @@ -7,8 +7,8 @@ source ./functions.sh declare -A variables=( [template]="main.bicep" [parameters]="main.bicepparam" - [resourceGroupName]="rg-ai-foundry-secure" - [location]="eastus" + [resourceGroupName]="rg-openai-fundamentals" + [location]="westus" [validateTemplate]=0 [useWhatIf]=0 ) @@ -82,6 +82,8 @@ if [[ $validateTemplate == 1 ]]; then fi # Deploy the Bicep template +# Get deployment name from template filename (without path and 
extension) +deploymentName=$(basename "$template" .bicep) echo "Deploying [$template] Bicep template..." deploymentOutputs=$(az deployment group create \ --resource-group $resourceGroupName \ @@ -90,14 +92,15 @@ deploymentOutputs=$(az deployment group create \ --parameters $parameters \ --parameters location=$location \ --parameters userObjectId=$userObjectId \ - --query 'properties.outputs' -o json) + --query 'properties.outputs' -o json 2>/dev/null | grep -A 9999 '^{') - #echo $deploymentOutputs if [[ $? == 0 ]]; then echo "[$template] Bicep template deployment succeeded" else echo "Failed to deploy [$template] Bicep template" - exit + echo "Fetching deployment error details..." + az deployment group show --resource-group "$resourceGroupName" --name "$deploymentName" --query 'properties.error' -o json + exit 1 fi json=$deploymentOutputs @@ -113,7 +116,7 @@ environment_sample_file="../.env.sample" # check if the .env file already exists and back it up if it does if [[ -f "$environment_file" ]]; then - random_chars=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 5) + random_chars=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 5) mv "$environment_file" "${environment_file}-${random_chars}.bak" echo -e "\e[33mWarning: Existing .env file found. Backed up to ${environment_file}-${random_chars}.bak\e[0m" else @@ -127,15 +130,10 @@ source $environment_sample_file # Extract values from JSON and write to .env file with double quotes around values echo "Populating .env file..." 
-echo "OPENAI_API_KEY=\"$(echo "$json" | jq -r '.deploymentInfo.value.aiServicesKey')\"" >> $environment_file echo "OPENAI_API_BASE=\"$(echo "$json" | jq -r '.deploymentInfo.value.aiServicesOpenAiEndpoint')\"" >> $environment_file -echo "AZURE_AI_SEARCH_KEY=\"$(echo "$json" | jq -r '.deploymentInfo.value.searchKey')\"" >> $environment_file echo "AZURE_AI_SEARCH_ENDPOINT=\"$(echo "$json" | jq -r '.deploymentInfo.value.searchEndpoint')\"" >> $environment_file echo "DOCUMENT_INTELLIGENCE_ENDPOINT=\"$(echo "$json" | jq -r '.deploymentInfo.value.documentEndpoint')\"" >> $environment_file -echo "DOCUMENT_INTELLIGENCE_KEY=\"$(echo "$json" | jq -r '.deploymentInfo.value.documentKey')\"" >> $environment_file -echo "AZURE_BLOB_STORAGE_ACCOUNT_NAME=\"$(echo "$json" | jq -r '.deploymentInfo.value.storageAccountName')\"" >> $environment_file -echo "AZURE_BLOB_STORAGE_KEY=\"$(echo "$json" | jq -r '.deploymentInfo.value.storageAccountKey')\"" >> $environment_file -echo "AZURE_BLOB_STORAGE_CONNECTION_STRING=\"$(echo "$json" | jq -r '.deploymentInfo.value.storageAccountConnectionString')\"" >> $environment_file +echo "AZURE_AI_PROJECT_ENDPOINT=\"$(echo "$json" | jq -r '.deploymentInfo.value.projectEndpoint')\"" >> $environment_file # Warning: this assumes the first deployed model is the chat model used by the Jupyter notebooks echo "CHAT_MODEL_NAME=\"$(echo "$json" | jq -r '.deploymentInfo.value.deployedModels[0].name')\"" >> $environment_file diff --git a/066-OpenAIFundamentals/Student/Resources/infra/functions.sh b/066-OpenAIFundamentals/Student/Resources/infra/functions.sh index 02ae19a2fe..3ef25acf12 100755 --- a/066-OpenAIFundamentals/Student/Resources/infra/functions.sh +++ b/066-OpenAIFundamentals/Student/Resources/infra/functions.sh @@ -17,7 +17,7 @@ function authenticate_to_azure { parse_args() { # $1 - The associative array name containing the argument definitions and default values # $2 - The arguments passed to the script - local -n arg_defs=$1 + local -n arg_defs=$1 # 
namerefs (local -n) require bash 4.3+; the default bash shipped with macOS is 3.2, so brew install bash and run /opt/homebrew/bin/bash ./deploy.sh to use it. shift local args=("$@") diff --git a/066-OpenAIFundamentals/Student/Resources/infra/main.bicep b/066-OpenAIFundamentals/Student/Resources/infra/main.bicep index 3f60ee1fe1..8bc0349635 100644 --- a/066-OpenAIFundamentals/Student/Resources/infra/main.bicep +++ b/066-OpenAIFundamentals/Student/Resources/infra/main.bicep @@ -11,49 +11,13 @@ param location string = resourceGroup().location @description('Specifies the name of the Network Security Perimeter.') param nspName string = '' -@description('Specifies the name Azure AI Hub workspace.') -param hubName string = '' - -@description('Specifies the friendly name of the Azure AI Hub workspace.') -param hubFriendlyName string = 'Demo AI Hub' - -@description('Specifies the description for the Azure AI Hub workspace displayed in Azure AI Foundry.') -param hubDescription string = 'This is a demo hub for use in Azure AI Foundry.' - -@description('Specifies the Isolation mode for the managed network of the Azure AI Hub workspace.') -@allowed([ - 'AllowInternetOutbound' - 'AllowOnlyApprovedOutbound' - 'Disabled' -]) -param hubIsolationMode string = 'Disabled' - -@description('Specifies the public network access for the Azure AI Hub workspace.') -param hubPublicNetworkAccess string = 'Enabled' - -@description('Specifies the authentication method for the OpenAI Service connection.') -@allowed([ - 'ApiKey' - 'AAD' - 'ManagedIdentity' - 'None' -]) -param connectionAuthType string = 'AAD' - -@description('Determines whether or not to use credentials for the system datastores of the workspace workspaceblobstore and workspacefilestore. The default value is accessKey, in which case, the workspace will create the system datastores with credentials.
If set to identity, the workspace will create the system datastores with no credentials.') -@allowed([ - 'identity' - 'accessKey' -]) -param systemDatastoresAuthMode string = 'identity' - -@description('Specifies the name for the Azure AI Foundry Hub Project workspace.') +@description('Specifies the name for the Microsoft Foundry Project.') param projectName string = '' -@description('Specifies the friendly name for the Azure AI Foundry Hub Project workspace.') -param projectFriendlyName string = 'AI Foundry Hub Project' +@description('Specifies the friendly name for the Microsoft Foundry Project.') +param projectFriendlyName string = 'Microsoft Foundry Project' -@description('Specifies the public network access for the Azure AI Project workspace.') +@description('Specifies the public network access for the Microsoft Foundry Project.') param projectPublicNetworkAccess string = 'Enabled' @description('Specifies the name of the Azure Log Analytics resource.') @@ -71,9 +35,6 @@ param logAnalyticsSku string = 'PerNode' @description('Specifies the workspace data retention in days. -1 means Unlimited retention for the Unlimited Sku. 
730 days is the maximum allowed for all other Skus.') param logAnalyticsRetentionInDays int = 60 -@description('Specifies the name of the Azure Application Insights resource.') -param applicationInsightsName string = '' - @description('Specifies the name of the Azure AI Services resource.') param aiServicesName string = '' @@ -90,9 +51,6 @@ param aiServicesIdentity object = { @description('Specifies an optional subdomain name used for token-based authentication.') param aiServicesCustomSubDomainName string = '' -@description('Specifies whether disable the local authentication via API key.') -param aiServicesDisableLocalAuth bool = false - @description('Specifies whether or not public endpoint access is allowed for this account..') @allowed([ 'Enabled' @@ -155,23 +113,6 @@ param keyVaultEnableRbacAuthorization bool = true @description('Specifies the soft delete retention in days.') param keyVaultSoftDeleteRetentionInDays int = 7 -@description('Specifies whether creating the Azure Container Registry.') -param acrEnabled bool = false - -@description('Specifies the name of the Azure Container Registry resource.') -param acrName string = '' - -@description('Enable admin user that have push / pull permission to the registry.') -param acrAdminUserEnabled bool = false - -@description('Tier of your Azure Container Registry.') -@allowed([ - 'Basic' - 'Standard' - 'Premium' -]) -param acrSku string = 'Standard' - @description('Specifies the name of the Azure Azure Storage Account resource resource.') param storageAccountName string = '' @@ -247,30 +188,6 @@ module workspace 'modules/logAnalytics.bicep' = { } } -module applicationInsights 'modules/applicationInsights.bicep' = { - name: 'applicationInsights' - params: { - // properties - name: empty(applicationInsightsName) ? 
toLower('app-insights-${suffix}') : applicationInsightsName - location: location - tags: tags - workspaceId: workspace.outputs.id - } -} - -module containerRegistry 'modules/containerRegistry.bicep' = if (acrEnabled) { - name: 'containerRegistry' - params: { - // properties - name: empty(acrName) ? toLower('acr${suffix}') : acrName - location: location - tags: tags - sku: acrSku - adminUserEnabled: acrAdminUserEnabled - workspaceId: workspace.outputs.id - } -} - module storageAccount 'modules/storageAccount.bicep' = { name: 'storageAccount' params: { @@ -287,7 +204,7 @@ module storageAccount 'modules/storageAccount.bicep' = { networkAclsDefaultAction: storageAccountANetworkAclsDefaultAction supportsHttpsTrafficOnly: storageAccountSupportsHttpsTrafficOnly workspaceId: workspace.outputs.id - + // role assignments userObjectId: userObjectId aiServicesPrincipalId: aiServices.outputs.principalId @@ -306,7 +223,6 @@ module aiServices 'modules/aiServices.bicep' = { customSubDomainName: empty(aiServicesCustomSubDomainName) ? toLower('ai-services-${suffix}') : aiServicesCustomSubDomainName - disableLocalAuth: aiServicesDisableLocalAuth publicNetworkAccess: aiServicesPublicNetworkAccess deployments: openAiDeployments workspaceId: workspace.outputs.id @@ -316,55 +232,6 @@ module aiServices 'modules/aiServices.bicep' = { } } -module hub 'modules/hub.bicep' = { - name: 'hub' - params: { - // workspace organization - name: empty(hubName) ? toLower('hub-${suffix}') : hubName - friendlyName: hubFriendlyName - description_: hubDescription - location: location - tags: tags - - // dependent resources - aiServicesName: aiServices.outputs.name - applicationInsightsId: applicationInsights.outputs.id - containerRegistryId: acrEnabled ? 
containerRegistry.outputs.id : '' - keyVaultId: keyVault.outputs.id - storageAccountId: storageAccount.outputs.id - connectionAuthType: connectionAuthType - systemDatastoresAuthMode: systemDatastoresAuthMode - - // workspace configuration - publicNetworkAccess: hubPublicNetworkAccess - isolationMode: hubIsolationMode - workspaceId: workspace.outputs.id - - // role assignments - userObjectId: userObjectId - } -} - -module project 'modules/project.bicep' = { - name: 'project' - params: { - // workspace organization - name: empty(projectName) ? toLower('project-${suffix}') : projectName - friendlyName: projectFriendlyName - location: location - tags: tags - - // workspace configuration - publicNetworkAccess: projectPublicNetworkAccess - hubId: hub.outputs.id - workspaceId: workspace.outputs.id - - // role assignments - userObjectId: userObjectId - aiServicesPrincipalId: aiServices.outputs.principalId - } -} - module networkSecurityPerimeter 'modules/networkSecurityPerimeter.bicep' = if (nspEnabled) { name: 'networkSecurityPerimeter' params: { @@ -388,6 +255,7 @@ module document 'modules/document.bicep' = { params: { name: 'document-${suffix}' location: location + customSubDomainName: toLower('document-intelligence-${suffix}') } } @@ -398,16 +266,8 @@ output deploymentInfo object = { aiServicesName: aiServices.outputs.name aiServicesEndpoint: aiServices.outputs.endpoint aiServicesOpenAiEndpoint: aiServices.outputs.openAiEndpoint - aiServicesKey: aiServices.outputs.key1 - hubName: hub.outputs.name - projectName: project.outputs.name - documentKey: document.outputs.key1 documentEndpoint: document.outputs.endpoint - searchKey: search.outputs.primaryKey searchEndpoint: search.outputs.endpoint - storageAccountName: storageAccount.outputs.name - storageAccountId: storageAccount.outputs.id - storageAccountConnectionString: storageAccount.outputs.connectionString - storageAccountKey: storageAccount.outputs.primaryKey deployedModels: aiServices.outputs.deployedModels + 
projectEndpoint: aiServices.outputs.projectEndpoint } diff --git a/066-OpenAIFundamentals/Student/Resources/infra/main.bicepparam b/066-OpenAIFundamentals/Student/Resources/infra/main.bicepparam index c644af659f..04ba2c2fc7 100644 --- a/066-OpenAIFundamentals/Student/Resources/infra/main.bicepparam +++ b/066-OpenAIFundamentals/Student/Resources/infra/main.bicepparam @@ -3,8 +3,11 @@ using './main.bicep' param userObjectId = '' param keyVaultEnablePurgeProtection = false -param acrEnabled = false param nspEnabled = false +//param aiServicesDisableLocalAuth = false +param storageAccountAllowSharedKeyAccess = true +//param documentDisableLocalAuth = false + //The first model in the list will be the default model for the Jupyter notebooks param openAiDeployments = [ { diff --git a/066-OpenAIFundamentals/Student/Resources/infra/modules/aiServices.bicep b/066-OpenAIFundamentals/Student/Resources/infra/modules/aiServices.bicep index 31bd1c25a1..bdeb2f4f51 100644 --- a/066-OpenAIFundamentals/Student/Resources/infra/modules/aiServices.bicep +++ b/066-OpenAIFundamentals/Student/Resources/infra/modules/aiServices.bicep @@ -21,9 +21,6 @@ param tags object @description('Specifies an optional subdomain name used for token-based authentication.') param customSubDomainName string = '' -@description('Specifies whether disable the local authentication via API key.') -param disableLocalAuth bool = false - @description('Specifies whether or not public endpoint access is allowed for this account..') @allowed([ 'Enabled' @@ -64,7 +61,7 @@ var aiServicesMetrics = [ ] // Resources -resource aiServices 'Microsoft.CognitiveServices/accounts@2024-04-01-preview' = { +resource aiServices 'Microsoft.CognitiveServices/accounts@2025-09-01' = { name: name location: location sku: sku @@ -73,13 +70,25 @@ resource aiServices 'Microsoft.CognitiveServices/accounts@2024-04-01-preview' = tags: tags properties: { customSubDomainName: customSubDomainName - disableLocalAuth: disableLocalAuth 
publicNetworkAccess: publicNetworkAccess + allowProjectManagement: true + defaultProject: '${name}-project' + associatedProjects:[ + '${name}-project' + ] + } +} +resource project 'Microsoft.CognitiveServices/accounts/projects@2025-09-01' = { + name: '${aiServices.name}-project' + parent: aiServices + location: location + identity: identity + properties: { + description: 'Default project for the AI Services account.' } } - @batchSize(1) -resource model 'Microsoft.CognitiveServices/accounts/deployments@2023-05-01' = [ +resource model 'Microsoft.CognitiveServices/accounts/deployments@2025-09-01' = [ for deployment in deployments: { name: deployment.model.name parent: aiServices @@ -174,9 +183,8 @@ output name string = aiServices.name output endpoint string = aiServices.properties.endpoint output openAiEndpoint string = aiServices.properties.endpoints['OpenAI Language Model Instance API'] output principalId string = aiServices.identity.principalId -#disable-next-line outputs-should-not-contain-secrets -output key1 string = aiServices.listKeys().key1 +output projectEndpoint string = project.properties.endpoints['AI Foundry API'] // Output the deployed model names output deployedModels array = [for deployment in deployments: { name: deployment.model.name -}] \ No newline at end of file +}] diff --git a/066-OpenAIFundamentals/Student/Resources/infra/modules/applicationInsights.bicep b/066-OpenAIFundamentals/Student/Resources/infra/modules/applicationInsights.bicep deleted file mode 100644 index 69cb91a519..0000000000 --- a/066-OpenAIFundamentals/Student/Resources/infra/modules/applicationInsights.bicep +++ /dev/null @@ -1,37 +0,0 @@ -// Parameters -@description('Specifies the name of the Azure Application Insights.') -param name string - -@description('Specifies the location.') -param location string = resourceGroup().location - -@description('Specifies the Azure Log Analytics workspace ID.') -param workspaceId string - -@description('Specifies the resource tags.') 
-param tags object
-
-// Resources
-resource applicationInsights 'Microsoft.Insights/components@2020-02-02' = {
-  name: name
-  location: location
-  tags: tags
-  kind: 'web'
-  properties: {
-    Application_Type: 'web'
-    DisableIpMasking: false
-    DisableLocalAuth: false
-    Flow_Type: 'Bluefield'
-    ForceCustomerStorageForProfiler: false
-    ImmediatePurgeDataOn30Days: true
-    WorkspaceResourceId: workspaceId
-    IngestionMode: 'LogAnalytics'
-    publicNetworkAccessForIngestion: 'Enabled'
-    publicNetworkAccessForQuery: 'Disabled'
-    Request_Source: 'rest'
-  }
-}
-
-//Outputs
-output id string = applicationInsights.id
-output name string = applicationInsights.name
diff --git a/066-OpenAIFundamentals/Student/Resources/infra/modules/containerRegistry.bicep b/066-OpenAIFundamentals/Student/Resources/infra/modules/containerRegistry.bicep
deleted file mode 100644
index 636e8e3c26..0000000000
--- a/066-OpenAIFundamentals/Student/Resources/infra/modules/containerRegistry.bicep
+++ /dev/null
@@ -1,90 +0,0 @@
-// Parameters
-@description('Name of your Azure Container Registry')
-@minLength(5)
-@maxLength(50)
-param name string = 'acr${uniqueString(resourceGroup().id)}'
-
-@description('Enable admin user that have push / pull permission to the registry.')
-param adminUserEnabled bool = false
-
-@description('Whether to allow public network access. Defaults to Enabled.')
-@allowed([
-  'Disabled'
-  'Enabled'
-])
-param publicNetworkAccess string = 'Enabled'
-
-@description('Tier of your Azure Container Registry.')
-@allowed([
-  'Basic'
-  'Standard'
-  'Premium'
-])
-param sku string = 'Premium'
-
-@description('Specifies the resource id of the Log Analytics workspace.')
-param workspaceId string
-
-@description('Specifies the location.')
-param location string = resourceGroup().location
-
-@description('Specifies the resource tags.')
-param tags object
-
-// Variables
-var diagnosticSettingsName = 'diagnosticSettings'
-var logCategories = [
-  'ContainerRegistryRepositoryEvents'
-  'ContainerRegistryLoginEvents'
-]
-var metricCategories = [
-  'AllMetrics'
-]
-var logs = [
-  for category in logCategories: {
-    category: category
-    enabled: true
-    retentionPolicy: {
-      enabled: true
-      days: 0
-    }
-  }
-]
-var metrics = [
-  for category in metricCategories: {
-    category: category
-    enabled: true
-    retentionPolicy: {
-      enabled: true
-      days: 0
-    }
-  }
-]
-
-// Resources
-resource containerRegistry 'Microsoft.ContainerRegistry/registries@2021-12-01-preview' = {
-  name: name
-  location: location
-  tags: tags
-  sku: {
-    name: sku
-  }
-  properties: {
-    adminUserEnabled: adminUserEnabled
-    publicNetworkAccess: publicNetworkAccess
-  }
-}
-
-resource diagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
-  name: diagnosticSettingsName
-  scope: containerRegistry
-  properties: {
-    workspaceId: workspaceId
-    logs: logs
-    metrics: metrics
-  }
-}
-
-// Outputs
-output id string = containerRegistry.id
-output name string = containerRegistry.name
diff --git a/066-OpenAIFundamentals/Student/Resources/infra/modules/document.bicep b/066-OpenAIFundamentals/Student/Resources/infra/modules/document.bicep
index 5b07c6624c..c1284b8b2d 100644
--- a/066-OpenAIFundamentals/Student/Resources/infra/modules/document.bicep
+++ b/066-OpenAIFundamentals/Student/Resources/infra/modules/document.bicep
@@ -4,17 +4,20 @@ param name string
 @description('Location where the Azure Document Intelligence will be created.')
 param location string
-resource account 'Microsoft.CognitiveServices/accounts@2024-10-01' = {
+@description('Custom subdomain name for the Azure Document Intelligence.')
+param customSubDomainName string
+
+resource account 'Microsoft.CognitiveServices/accounts@2025-09-01' = {
   name: name
   location: location
   sku: {
     name: 'S0'
   }
   kind: 'FormRecognizer'
-  properties: {
+  properties: {
+    customSubDomainName: customSubDomainName
+  }
 }
-#disable-next-line outputs-should-not-contain-secrets
-output key1 string = account.listKeys().key1
 output endpoint string = account.properties.endpoint
diff --git a/066-OpenAIFundamentals/Student/Resources/infra/modules/hub.bicep b/066-OpenAIFundamentals/Student/Resources/infra/modules/hub.bicep
deleted file mode 100644
index c0cfe4165c..0000000000
--- a/066-OpenAIFundamentals/Student/Resources/infra/modules/hub.bicep
+++ /dev/null
@@ -1,202 +0,0 @@
-// Parameters
-@description('Specifies the name')
-param name string
-
-@description('Specifies the location.')
-param location string
-
-@description('Specifies the resource tags.')
-param tags object
-
-@description('The SKU name to use for the AI Foundry Hub Resource')
-param skuName string = 'Basic'
-
-@description('The SKU tier to use for the AI Foundry Hub Resource')
-@allowed(['Basic', 'Free', 'Premium', 'Standard'])
-param skuTier string = 'Basic'
-
-@description('Specifies the display name')
-param friendlyName string = name
-
-@description('Specifies the description')
-param description_ string
-
-@description('Specifies the Isolation mode for the managed network of a machine learning workspace.')
-@allowed([
-  'AllowInternetOutbound'
-  'AllowOnlyApprovedOutbound'
-  'Disabled'
-])
-param isolationMode string = 'Disabled'
-
-@description('Specifies the public network access for the machine learning workspace.')
-@allowed([
-  'Disabled'
-  'Enabled'
-])
-param publicNetworkAccess string = 'Enabled'
-
-@description('Specifies the resource ID of the application insights resource for storing diagnostics logs')
-param applicationInsightsId string
-
-@description('Specifies the resource ID of the container registry resource for storing docker images')
-param containerRegistryId string
-
-@description('Specifies the resource ID of the key vault resource for storing connection strings')
-param keyVaultId string
-
-@description('Specifies the resource ID of the storage account resource for storing experimentation outputs')
-param storageAccountId string
-
-@description('Specifies thename of the Azure AI Services resource')
-param aiServicesName string
-
-@description('Specifies the authentication method for the OpenAI Service connection.')
-@allowed([
-  'ApiKey'
-  'AAD'
-  'ManagedIdentity'
-  'None'
-])
-param connectionAuthType string = 'AAD'
-
-@description('Specifies the name for the Azure OpenAI Service connection.')
-param aiServicesConnectionName string = ''
-
-@description('Specifies the resource id of the Log Analytics workspace.')
-param workspaceId string
-
-@description('Specifies the object id of a Miccrosoft Entra ID user. In general, this the object id of the system administrator who deploys the Azure resources.')
-param userObjectId string = ''
-
-@description('Optional. The name of logs that will be streamed.')
-@allowed([
-  'ComputeInstanceEvent'
-])
-param logsToEnable array = [
-  'ComputeInstanceEvent'
-]
-
-@description('Optional. The name of metrics that will be streamed.')
-@allowed([
-  'AllMetrics'
-])
-param metricsToEnable array = [
-  'AllMetrics'
-]
-
-@description('Determines whether or not to use credentials for the system datastores of the workspace workspaceblobstore and workspacefilestore. The default value is accessKey, in which case, the workspace will create the system datastores with credentials. If set to identity, the workspace will create the system datastores with no credentials.')
-@allowed([
-  'identity'
-  'accessKey'
-])
-param systemDatastoresAuthMode string = 'identity'
-
-// Variables
-var diagnosticSettingsName = 'diagnosticSettings'
-var logs = [
-  for log in logsToEnable: {
-    category: log
-    enabled: true
-    retentionPolicy: {
-      enabled: true
-      days: 0
-    }
-  }
-]
-
-var metrics = [
-  for metric in metricsToEnable: {
-    category: metric
-    timeGrain: null
-    enabled: true
-    retentionPolicy: {
-      enabled: true
-      days: 0
-    }
-  }
-]
-
-// Resources
-resource aiServices 'Microsoft.CognitiveServices/accounts@2024-04-01-preview' existing = {
-  name: aiServicesName
-}
-
-resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01-preview' = {
-  name: name
-  location: location
-  tags: tags
-  sku: {
-    name: skuName
-    tier: skuTier
-  }
-  kind: 'Hub'
-  identity: {
-    type: 'SystemAssigned'
-  }
-  properties: {
-    // organization
-    friendlyName: friendlyName
-    description: description_
-    managedNetwork: {
-      isolationMode: isolationMode
-    }
-    publicNetworkAccess: publicNetworkAccess
-
-    // dependent resources
-    keyVault: keyVaultId
-    storageAccount: storageAccountId
-    applicationInsights: applicationInsightsId
-    containerRegistry: containerRegistryId == '' ? null : containerRegistryId
-    systemDatastoresAuthMode: systemDatastoresAuthMode
-  }
-
-  resource aiServicesConnection 'connections@2024-01-01-preview' = {
-    name: !empty(aiServicesConnectionName) ? aiServicesConnectionName : toLower('${aiServices.name}-connection')
-    properties: {
-      category: 'AIServices'
-      target: aiServices.properties.endpoint
-      authType: connectionAuthType
-      isSharedToAll: true
-      metadata: {
-        ApiType: 'Azure'
-        ResourceId: aiServices.id
-      }
-      credentials: connectionAuthType == 'ApiKey'
-        ? {
-            key: aiServices.listKeys().key1
-          }
-        : null
-    }
-  }
-}
-
-resource azureMLDataScientistRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
-  name: 'f6c7c914-8db3-469d-8ca1-694a8f32e121'
-  scope: subscription()
-}
-
-// This role assignment grants the user the required permissions to start a Prompt Flow in a compute service within Azure AI Foundry
-resource azureMLDataScientistUserRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (!empty(userObjectId)) {
-  name: guid(hub.id, azureMLDataScientistRole.id, userObjectId)
-  scope: hub
-  properties: {
-    roleDefinitionId: azureMLDataScientistRole.id
-    principalType: 'User'
-    principalId: userObjectId
-  }
-}
-
-resource diagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
-  name: diagnosticSettingsName
-  scope: hub
-  properties: {
-    workspaceId: workspaceId
-    logs: logs
-    metrics: metrics
-  }
-}
-
-// Outputs
-output name string = hub.name
-output id string = hub.id
diff --git a/066-OpenAIFundamentals/Student/Resources/infra/modules/project.bicep b/066-OpenAIFundamentals/Student/Resources/infra/modules/project.bicep
index ba3fe208cb..0ce1acc760 100644
--- a/066-OpenAIFundamentals/Student/Resources/infra/modules/project.bicep
+++ b/066-OpenAIFundamentals/Student/Resources/infra/modules/project.bicep
@@ -155,7 +155,7 @@ resource aiDeveloperRoleAssignment 'Microsoft.Authorization/roleAssignments@2022
   }
 }
-// This role assignment grants the user the required permissions to start a Prompt Flow in a compute service within Azure AI Foundry
+// This role assignment grants the user the required permissions to start a Prompt Flow in a compute service within Microsoft Foundry
 resource azureMLDataScientistUserRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (!empty(userObjectId)) {
   name: guid(project.id, azureMLDataScientistRole.id, userObjectId)
   scope: project
@@ -166,7 +166,7 @@ resource azureMLDataScientistUserRoleAssignment 'Microsoft.Authorization/roleAss
   }
 }
-// This role assignment grants the Azure AI Services managed identity the required permissions to start Prompt Flow in a compute service defined in Azure AI Foundry
+// This role assignment grants the Azure AI Services managed identity the required permissions to start Prompt Flow in a compute service defined in Microsoft Foundry
 resource azureMLDataScientistManagedIdentityRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (!empty(aiServicesPrincipalId)) {
   name: guid(project.id, azureMLDataScientistRole.id, aiServicesPrincipalId)
   scope: project
diff --git a/066-OpenAIFundamentals/Student/Resources/infra/modules/search.bicep b/066-OpenAIFundamentals/Student/Resources/infra/modules/search.bicep
index 60c837d8fb..e7c3c7c7c4 100644
--- a/066-OpenAIFundamentals/Student/Resources/infra/modules/search.bicep
+++ b/066-OpenAIFundamentals/Student/Resources/infra/modules/search.bicep
@@ -14,9 +14,12 @@ resource search 'Microsoft.Search/searchServices@2023-11-01' = {
     replicaCount: 1
     partitionCount: 1
     hostingMode: 'default'
+    authOptions: {
+      aadOrApiKey: {
+        aadAuthFailureMode: 'http401WithBearerChallenge'
+      }
+    }
   }
 }
-#disable-next-line outputs-should-not-contain-secrets
-output primaryKey string = search.listAdminKeys().primaryKey
 output endpoint string = 'https://${name}.search.windows.net'
diff --git a/066-OpenAIFundamentals/Student/Resources/infra/modules/storageAccount.bicep b/066-OpenAIFundamentals/Student/Resources/infra/modules/storageAccount.bicep
index 85c59f88b2..ed31e47894 100644
--- a/066-OpenAIFundamentals/Student/Resources/infra/modules/storageAccount.bicep
+++ b/066-OpenAIFundamentals/Student/Resources/infra/modules/storageAccount.bicep
@@ -248,8 +248,3 @@ resource blobServiceDiagnosticSettings 'Microsoft.Insights/diagnosticSettings@20
 // Outputs
 output id string = storageAccount.id
 output name string = storageAccount.name
-#disable-next-line outputs-should-not-contain-secrets
-output primaryKey string = storageAccount.listKeys().keys[0].value
-
-#disable-next-line outputs-should-not-contain-secrets
-output connectionString string = 'DefaultEndpointsProtocol=https;AccountName=${name};AccountKey=${storageAccount.listKeys().keys[0].value};EndpointSuffix=core.windows.net'
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-01-PromptEngineering.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-01-PromptEngineering.ipynb
index ccfa510fbf..e7ba963f42 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-01-PromptEngineering.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-01-PromptEngineering.ipynb
@@ -37,7 +37,7 @@
   },
   "source": [
    "## 1. Parameter Experimentation\n",
-    "Let's first set up the Challenge. Load the API key and relevant Python libraries using the cells below."
+    "Let's first set up the Challenge. The cells below install the required Python packages, load the environment variables, and import the relevant Python libraries."
   ]
  },
  {
@@ -90,12 +90,12 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "%pip install -r ../requirements-old.txt"
+    "%pip install -r ../requirements.txt"
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 1,
   "metadata": {
    "gather": {
     "logged": 1686932813309
@@ -115,8 +115,11 @@
    "import openai\n",
    "import os\n",
    "import json\n",
+    "\n",
    "from dotenv import load_dotenv, find_dotenv\n",
-    "load_dotenv(find_dotenv())"
+    "load_dotenv(find_dotenv())\n",
+    "from openai import AzureOpenAI\n",
+    "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n"
   ]
  },
  {
@@ -138,9 +141,10 @@
   },
   "outputs": [],
   "source": [
-    "API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
-    "assert API_KEY, \"ERROR: Azure OpenAI Key is missing\"\n",
-    "openai.api_key = API_KEY\n",
+    "token_provider = get_bearer_token_provider(\n",
+    "    DefaultAzureCredential(),\n",
+    "    \"https://cognitiveservices.azure.com/.default\"\n",
+    ")\n",
    "\n",
    "RESOURCE_ENDPOINT = os.getenv(\"OPENAI_API_BASE\",\"\").strip()\n",
    "assert RESOURCE_ENDPOINT, \"ERROR: Azure OpenAI Endpoint is missing\"\n",
@@ -149,8 +153,14 @@
    "openai.api_base = RESOURCE_ENDPOINT\n",
    "openai.api_type = os.getenv(\"OPENAI_API_TYPE\")\n",
    "openai.api_version = os.getenv(\"OPENAI_API_VERSION\")\n",
+    "openai.azure_ad_token_provider = token_provider\n",
+    "chat_model=os.getenv(\"CHAT_MODEL_NAME\")\n",
    "\n",
-    "chat_model=os.getenv(\"CHAT_MODEL_NAME\")\n"
+    "client = AzureOpenAI(\n",
+    "    azure_endpoint=RESOURCE_ENDPOINT,\n",
+    "    azure_ad_token_provider=token_provider,\n",
+    "    api_version=os.getenv(\"OPENAI_API_VERSION\")\n",
+    ")\n"
   ]
  },
  {
@@ -215,7 +225,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 5,
   "metadata": {
    "gather": {
     "logged": 1686938673045
@@ -234,19 +244,19 @@
   "source": [
    "def get_chat_completion(prompt, model=chat_model):\n",
    "    messages = [{\"role\": \"user\", \"content\": prompt}]\n",
-    "    response = openai.ChatCompletion.create(\n",
-    "        engine=model,\n",
+    "    response = client.chat.completions.create(\n",
+    "        model=chat_model,\n",
    "        messages=messages,\n",
    "        temperature=0, # this is the degree of randomness of the model's output\n",
    "        max_tokens = 200,\n",
    "        top_p = 1.0\n",
    "    )\n",
-    "    return response.choices[0].message[\"content\"]"
+    "    return response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 6,
   "metadata": {
    "gather": {
     "logged": 1686938550664
@@ -264,14 +274,13 @@
   "outputs": [],
   "source": [
    "def get_completion_from_messages(messages, model=chat_model, temperature=0):\n",
-    "    response = openai.ChatCompletion.create(\n",
-    "        engine=model,\n",
+    "    response = client.chat.completions.create(\n",
+    "        model=chat_model,\n",
    "        messages=messages,\n",
    "        temperature=temperature # this is the degree of randomness of the model's output\n",
    "    )\n",
    "\n",
-    "    return response.choices[0].message[\"content\"]\n",
-    "\n"
+    "    return response.choices[0].message.content\n"
   ]
  },
  {
@@ -673,7 +682,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 15,
   "metadata": {
    "gather": {
     "logged": 1685081594233
@@ -742,7 +751,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 16,
   "metadata": {
    "gather": {
     "logged": 1685059771050
@@ -883,7 +892,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 18,
   "metadata": {
    "jupyter": {
     "outputs_hidden": false,
@@ -1879,7 +1888,7 @@
    "name": "python38-azureml"
   },
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": ".venv (3.13.11)",
   "language": "python",
   "name": "python3"
  },
@@ -1893,7 +1902,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
-  "version": "3.11.13"
+  "version": "3.13.11"
  },
 "microsoft": {
  "host": {
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-A-Grounding.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-A-Grounding.ipynb
index 195c83072e..aa7f612044 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-A-Grounding.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-A-Grounding.ipynb
@@ -35,7 +35,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
@@ -44,7 +44,9 @@
    "import json\n",
    "\n",
    "from dotenv import load_dotenv, find_dotenv\n",
-    "load_dotenv(find_dotenv())"
+    "load_dotenv(find_dotenv())\n",
+    "from openai import AzureOpenAI\n",
+    "from azure.identity import DefaultAzureCredential, get_bearer_token_provider"
   ]
  },
  {
@@ -62,19 +64,26 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
-    "assert API_KEY, \"ERROR: Azure OpenAI Key is missing\"\n",
-    "openai.api_key = API_KEY\n",
+    "token_provider = get_bearer_token_provider(\n",
+    "    DefaultAzureCredential(),\n",
+    "    \"https://cognitiveservices.azure.com/.default\"\n",
+    ")\n",
    "\n",
    "RESOURCE_ENDPOINT = os.getenv(\"OPENAI_API_BASE\",\"\").strip()\n",
    "assert RESOURCE_ENDPOINT, \"ERROR: Azure OpenAI Endpoint is missing\"\n",
    "assert \"openai.azure.com\" in RESOURCE_ENDPOINT.lower(), \"ERROR: Azure OpenAI Endpoint should be in the form: \\n\\n\\t.openai.azure.com\"\n",
-    "openai.api_base = RESOURCE_ENDPOINT\n",
    "\n",
+    "openai.api_base = RESOURCE_ENDPOINT\n",
    "openai.api_type = os.getenv(\"OPENAI_API_TYPE\")\n",
    "openai.api_version = os.getenv(\"OPENAI_API_VERSION\")\n",
-    "\n",
-    "model=os.getenv(\"CHAT_MODEL_NAME\")"
+    "openai.azure_ad_token_provider = token_provider\n",
+    "chat_model=os.getenv(\"CHAT_MODEL_NAME\")\n",
+    "\n",
+    "client = AzureOpenAI(\n",
+    "    azure_endpoint=RESOURCE_ENDPOINT,\n",
+    "    azure_ad_token_provider=token_provider,\n",
+    "    api_version=os.getenv(\"OPENAI_API_VERSION\")\n",
+    ")"
   ]
  },
  {
@@ -88,20 +97,20 @@
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
-    "def get_chat_completion(prompt, model=model):\n",
+    "def get_chat_completion(prompt, model=chat_model):\n",
    "    messages = [{\"role\": \"user\", \"content\": prompt}]\n",
-    "    response = openai.ChatCompletion.create(\n",
-    "        engine=model,\n",
+    "    response = client.chat.completions.create(\n",
+    "        model=chat_model,\n",
    "        messages=messages,\n",
    "        temperature=0, # this is the degree of randomness of the model's output\n",
    "        max_tokens = 200,\n",
    "        top_p = 1.0\n",
    "    )\n",
-    "    return response.choices[0].message[\"content\"]"
+    "    return response.choices[0].message.content"
   ]
  },
  {
@@ -130,7 +139,7 @@
    "Enter Question Here\n",
    "\"\"\"\n",
    "\n",
-    "model_response = get_chat_completion(prompt, model=model)\n",
+    "model_response = get_chat_completion(prompt, model=chat_model)\n",
    "print(f\"Response: {model_response}\\n\")\n"
   ]
  },
@@ -156,7 +165,7 @@
    "Enter Question Here\n",
    "\"\"\"\n",
    "\n",
-    "model_response = get_chat_completion(prompt, model=model)\n",
+    "model_response = get_chat_completion(prompt, model=chat_model)\n",
    "print(f\"Response: {model_response}\\n\")"
   ]
  },
@@ -183,7 +192,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3 (ipykernel)",
+   "display_name": ".venv (3.13.11)",
   "language": "python",
   "name": "python3"
  },
@@ -197,7 +206,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
-  "version": "3.10.13"
+  "version": "3.13.11"
  },
  "orig_nbformat": 4
 },
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-B-Chunking.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-B-Chunking.ipynb
index b5e623f23b..a4deb96601 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-B-Chunking.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-B-Chunking.ipynb
@@ -44,31 +44,36 @@
   "metadata": {},
   "outputs": [],
   "source": [
+    "%pip install langchain langchain-text-splitters\n",
+    "\n",
    "import openai\n",
    "import PyPDF3\n",
    "import os\n",
    "import json\n",
    "import tiktoken\n",
    "import spacy\n",
-    "from openai.error import InvalidRequestError\n",
    "\n",
    "from dotenv import load_dotenv, find_dotenv\n",
    "load_dotenv(find_dotenv())\n",
-    "\n",
+    "from openai import AzureOpenAI\n",
+    "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
+    "token_provider = get_bearer_token_provider(\n",
+    "    DefaultAzureCredential(),\n",
+    "    \"https://cognitiveservices.azure.com/.default\"\n",
+    ")\n",
    "from spacy.lang.en import English \n",
    "nlp = spacy.load(\"en_core_web_sm\")\n",
    "\n",
    "import langchain\n",
-    "from langchain.text_splitter import RecursiveCharacterTextSplitter"
+    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
+    "from openai import BadRequestError"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Set up your environment to access your Azure OpenAI keys. Refer to your Azure OpenAI resource in the Azure Portal to retrieve information regarding your Azure OpenAI endpoint and keys. \n",
-    "\n",
-    "For security purposes, store your sensitive information in an .env file."
+    "This cell sets up your Python environment to access your Azure OpenAI endpoint and configures the OpenAI settings from your .env file."
   ]
  },
  {
@@ -77,19 +82,26 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "# Load your OpenAI credentials\n",
-    "API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
-    "assert API_KEY, \"ERROR: Azure OpenAI Key is missing\"\n",
-    "openai.api_key = API_KEY\n",
+    "token_provider = get_bearer_token_provider(\n",
+    "    DefaultAzureCredential(),\n",
+    "    \"https://cognitiveservices.azure.com/.default\"\n",
+    ")\n",
    "\n",
    "RESOURCE_ENDPOINT = os.getenv(\"OPENAI_API_BASE\",\"\").strip()\n",
    "assert RESOURCE_ENDPOINT, \"ERROR: Azure OpenAI Endpoint is missing\"\n",
    "assert \"openai.azure.com\" in RESOURCE_ENDPOINT.lower(), \"ERROR: Azure OpenAI Endpoint should be in the form: \\n\\n\\t.openai.azure.com\"\n",
-    "openai.api_base = RESOURCE_ENDPOINT\n",
    "\n",
+    "openai.api_base = RESOURCE_ENDPOINT\n",
    "openai.api_type = os.getenv(\"OPENAI_API_TYPE\")\n",
    "openai.api_version = os.getenv(\"OPENAI_API_VERSION\")\n",
-    "model=os.getenv(\"CHAT_MODEL_NAME\")\n"
+    "openai.azure_ad_token_provider = token_provider\n",
+    "chat_model=os.getenv(\"CHAT_MODEL_NAME\")\n",
+    "\n",
+    "client = AzureOpenAI(\n",
+    "    azure_endpoint=RESOURCE_ENDPOINT,\n",
+    "    azure_ad_token_provider=token_provider,\n",
+    "    api_version=os.getenv(\"OPENAI_API_VERSION\")\n",
+    ")"
   ]
  },
  {
@@ -164,12 +176,13 @@
   "outputs": [],
   "source": [
    "document = open(r'Insert PDF file path', 'rb') \n",
+    "\n",
    "doc_helper = PyPDF3.PdfFileReader(document)"
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
@@ -194,12 +207,16 @@
    "\n",
    "try:\n",
    "    final_prompt = prompt + q\n",
-    "    response = openai.ChatCompletion.create(engine=model, messages=final_prompt, max_tokens=50)\n",
-    "    answer = response.choices[0].text.strip()\n",
+    "    response = client.chat.completions.create(\n",
+    "        model=chat_model, \n",
+    "        messages=[{\"role\": \"user\", \"content\": final_prompt}], \n",
+    "        max_tokens=50\n",
+    "    )\n",
+    "    answer = response.choices[0].message.content.strip()\n",
    "    print(f\"{q}\\n{answer}\\n\")\n",
    "\n",
-    "except InvalidRequestError as e:\n",
-    "    print(e.error)\n",
+    "except BadRequestError as e:\n",
+    "    print(e)\n",
    "\n"
   ]
  },
@@ -387,7 +404,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3 (ipykernel)",
+   "display_name": ".venv (3.13.11)",
   "language": "python",
   "name": "python3"
  },
@@ -401,7 +418,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
-  "version": "3.10.13"
+  "version": "3.13.11"
  },
  "orig_nbformat": 4
 },
diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-C-Embeddings.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-C-Embeddings.ipynb
index 9e88ed1da1..11d6e59401 100644
--- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-C-Embeddings.ipynb
+++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-03-C-Embeddings.ipynb
@@ -52,7 +52,6 @@
   "source": [
    "! pip install num2words\n",
    "! pip install plotly\n",
-    "! pip install \"openai==0.28.1\" \n",
    "!
pip install nptyping" ] }, @@ -62,7 +61,6 @@ "metadata": {}, "outputs": [], "source": [ - "import openai\n", "import os\n", "import re \n", "import requests\n", @@ -70,11 +68,36 @@ "from num2words import num2words \n", "import pandas as pd \n", "import numpy as np\n", - "from openai.embeddings_utils import get_embedding, cosine_similarity \n", "import tiktoken\n", "from dotenv import load_dotenv\n", "from tenacity import retry, wait_random_exponential, stop_after_attempt\n", - "load_dotenv() " + "from sklearn.metrics.pairwise import cosine_similarity as sklearn_cosine_similarity\n", + "from openai import AzureOpenAI\n", + "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n", + "\n", + "load_dotenv()\n", + "\n", + "token_provider = get_bearer_token_provider(\n", + " DefaultAzureCredential(),\n", + " \"https://cognitiveservices.azure.com/.default\"\n", + ")\n", + "\n", + "# Initialize the Azure OpenAI client\n", + "client = AzureOpenAI(\n", + " azure_endpoint=os.getenv(\"OPENAI_API_BASE\"),\n", + " azure_ad_token_provider=token_provider,\n", + " api_version=os.getenv(\"OPENAI_API_VERSION\")\n", + ")\n", + "\n", + "# Define helper functions using the OpenAI 1.x API\n", + "@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))\n", + "def get_embedding(text: str, engine: str) -> list:\n", + "\ttext = text.replace(\"\\n\", \" \")\n", + "\tresponse = client.embeddings.create(input=[text], model=engine)\n", + "\treturn response.data[0].embedding\n", + "\n", + "def cosine_similarity(a, b):\n", + "\treturn np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))" ] }, { @@ -88,15 +111,12 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "metadata": {}, "outputs": [], "source": [ - "openai.api_type = os.getenv(\"OPENAI_API_TYPE\")\n", - "openai.api_key = os.environ.get(\"OPENAI_API_KEY\")\n", - "openai.api_base = os.environ.get(\"OPENAI_API_BASE\")\n", - "openai.api_version = 
os.getenv(\"OPENAI_API_VERSION\")\n", - "embedding_model=os.getenv(\"EMBEDDING_MODEL_NAME\")" + "# Get the embedding model name from environment\n", + "embedding_model = os.getenv(\"EMBEDDING_MODEL_NAME\")" ] }, { @@ -119,7 +139,7 @@ "\n", "input=\"I would like to order a pizza\"\n", "\n", - "# Add code here " + "# Add code here: Create embedding using the helper function\n" ] }, { @@ -127,7 +147,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The openai.Embedding.create() method will take a list of text - here we have a single sentence - and then will return a list containing a single embedding. You can use these embeddings when searching, providing recommendations, classification, and more." + "The client.embeddings.create() method takes a list of texts - here a single sentence - and returns a list containing a single embedding. You can use these embeddings for search, recommendations, classification, and more." ] }, { @@ -148,6 +168,7 @@ "outputs": [], "source": [ "df=pd.read_csv(os.path.join(os.getcwd(),r'Enter path here'))\n", + "\n", "df" ] }, @@ -195,7 +216,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 16, "metadata": {}, "outputs": [], "source": [ @@ -234,7 +255,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": ".venv (3.13.11)", "language": "python", "name": "python3" }, @@ -248,7 +269,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.13" + "version": "3.13.11" }, "orig_nbformat": 4 }, diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-A-RAG_for_structured_data.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-A-RAG_for_structured_data.ipynb index c35a148f3e..9126e9e20b 100644 --- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-A-RAG_for_structured_data.ipynb +++ 
b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-A-RAG_for_structured_data.ipynb @@ -106,7 +106,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "69bd738e", "metadata": {}, "outputs": [], @@ -119,9 +119,8 @@ "import pandas as pd\n", "import numpy as np\n", "from sklearn.metrics.pairwise import cosine_similarity\n", - "\n", + "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n", "# Azure Cognitive Search imports\n", - "from azure.core.credentials import AzureKeyCredential\n", "from azure.search.documents.indexes import SearchIndexClient \n", "from azure.search.documents import SearchClient\n", "from azure.search.documents.indexes.models import (\n", @@ -143,7 +142,12 @@ "from semantic_kernel.connectors.ai.open_ai import AzureChatPromptExecutionSettings\n", "\n", "from dotenv import load_dotenv\n", - "load_dotenv()" + "load_dotenv()\n", + "\n", + "token_provider = get_bearer_token_provider(\n", + " DefaultAzureCredential(),\n", + " \"https://cognitiveservices.azure.com/.default\"\n", + ")\n" ] }, { @@ -160,19 +164,19 @@ "# Initialize Semantic Kernel\n", "kernel = sk.Kernel()\n", "\n", - "# Add Azure OpenAI Chat Completion service\n", + "# Add Azure OpenAI Chat Completion service with Entra ID authentication\n", "chat_service = AzureChatCompletion(\n", " deployment_name=chat_model,\n", " endpoint=os.environ['OPENAI_API_BASE'],\n", - " api_key=os.environ['OPENAI_API_KEY']\n", + " ad_token_provider=token_provider\n", ")\n", "kernel.add_service(chat_service)\n", "\n", - "# Add Azure OpenAI Text Embedding service \n", + "# Add Azure OpenAI Text Embedding service with Entra ID authentication\n", "embedding_service = AzureTextEmbedding(\n", " deployment_name=embedding_model,\n", " endpoint=os.environ['OPENAI_API_BASE'],\n", - " api_key=os.environ['OPENAI_API_KEY']\n", + " ad_token_provider=token_provider\n", ")\n", "kernel.add_service(embedding_service)\n", "\n", @@ -206,10 +210,13 @@ "metadata": 
{}, "outputs": [], "source": [ - "# Create a Cognitive Search Index client\n", + "# Create a Cognitive Search Index client with Entra ID authentication\n", + "from azure.identity import AzureCliCredential\n", + "\n", "service_endpoint = os.getenv(\"AZURE_AI_SEARCH_ENDPOINT\") \n", - "key = os.getenv(\"AZURE_AI_SEARCH_KEY\")\n", - "credential = AzureKeyCredential(key)\n", + "\n", + "# Use AzureCliCredential for local development (more reliable than DefaultAzureCredential)\n", + "credential = AzureCliCredential()\n", "\n", "index_name = \"news-index\"\n", "\n", @@ -322,7 +329,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 25, "id": "32689db7-4337-42d9-b8f9-4cbd9d98a850", "metadata": { "gather": { @@ -819,7 +826,7 @@ "name": "python3" }, "kernelspec": { - "display_name": "Python 3", + "display_name": ".venv (3.13.11)", "language": "python", "name": "python3" }, @@ -833,7 +840,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.13" + "version": "3.13.11" }, "microsoft": { "host": { diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-B-RAG_for_unstructured_data.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-B-RAG_for_unstructured_data.ipynb index e4ca2b4acd..82f33157c6 100644 --- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-B-RAG_for_unstructured_data.ipynb +++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-04-B-RAG_for_unstructured_data.ipynb @@ -36,7 +36,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "metadata": {}, "outputs": [], "source": [ @@ -49,6 +49,7 @@ "from azure.core.credentials import AzureKeyCredential\n", "from azure.search.documents.indexes import SearchIndexClient \n", "from azure.search.documents import SearchClient\n", + "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n", "from azure.search.documents.indexes.models import (\n", " SearchIndex,\n", 
" SearchField,\n", @@ -66,12 +67,17 @@ "import numpy as np\n", "\n", "from dotenv import load_dotenv\n", - "load_dotenv()" + "load_dotenv()\n", + "\n", + "token_provider = get_bearer_token_provider(\n", + " DefaultAzureCredential(),\n", + " \"https://cognitiveservices.azure.com/.default\"\n", + ")" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "metadata": {}, "outputs": [], "source": [ @@ -80,10 +86,11 @@ "# Initialize the Azure OpenAI client for the latest version\n", "from openai import AzureOpenAI\n", "\n", + "# Initialize the Azure OpenAI client\n", "client = AzureOpenAI(\n", - " api_key=os.environ['OPENAI_API_KEY'],\n", - " api_version=os.environ['OPENAI_API_VERSION'],\n", - " azure_endpoint=os.environ['OPENAI_API_BASE']\n", + " azure_endpoint=os.getenv(\"OPENAI_API_BASE\"),\n", + " azure_ad_token_provider=token_provider,\n", + " api_version=os.getenv(\"OPENAI_API_VERSION\")\n", ")\n", "\n", "chat_model = os.environ['CHAT_MODEL_NAME']\n", @@ -99,7 +106,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "metadata": {}, "outputs": [], "source": [ @@ -111,18 +118,19 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "metadata": {}, "outputs": [], "source": [ - "from azure.core.credentials import AzureKeyCredential\n", "from azure.ai.formrecognizer import DocumentAnalysisClient\n", "\n", - "endpoint = os.environ[\"AZURE_DOC_INTELLIGENCE_ENDPOINT\"]\n", - "key = os.environ[\"AZURE_DOC_INTELLIGENCE_KEY\"]\n", + "endpoint = os.environ[\"DOCUMENT_INTELLIGENCE_ENDPOINT\"]\n", + "\n", + "# Use Entra ID authentication instead of API key\n", + "credential = DefaultAzureCredential()\n", "\n", "document_analysis_client = DocumentAnalysisClient(\n", - " endpoint=endpoint, credential=AzureKeyCredential(key)\n", + " endpoint=endpoint, credential=credential\n", ")" ] }, @@ -138,7 +146,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "metadata": {}, 
"outputs": [], "source": [ @@ -211,7 +219,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "metadata": {}, "outputs": [], "source": [ @@ -263,8 +271,7 @@ "source": [ "# Create an SDK client\n", "service_endpoint = os.getenv(\"AZURE_AI_SEARCH_ENDPOINT\") \n", - "key = os.getenv(\"AZURE_AI_SEARCH_KEY\")\n", - "credential = AzureKeyCredential(key)\n", + "credential = DefaultAzureCredential()\n", "\n", "index_name = \"research-paper-index\"\n", "\n", @@ -338,7 +345,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 21, "metadata": {}, "outputs": [], "source": [ @@ -374,7 +381,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 23, "metadata": {}, "outputs": [], "source": [ @@ -421,7 +428,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 24, "metadata": {}, "outputs": [], "source": [ @@ -488,7 +495,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 29, "metadata": {}, "outputs": [], "source": [ @@ -545,18 +552,11 @@ "answer = query_search(\"what is prompt tuning?\", 10)\n", "print(answer)" ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": ".venv (3.13.11)", "language": "python", "name": "python3" }, @@ -570,7 +570,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.13" + "version": "3.13.11" }, "orig_nbformat": 4 }, diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-06-AgenticAI.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-06-AgenticAI.ipynb new file mode 100644 index 0000000000..eefce0803f --- /dev/null +++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-06-AgenticAI.ipynb @@ -0,0 +1,411 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "243b51a2", + "metadata": {}, + "source": [ + "# 
Challenge 06 - Agentic AI\n", + "\n", + "In this notebook, you will build a **Research Assistant Agent** using the Microsoft Agent Framework. This agent uses the **Model Context Protocol (MCP)** to connect to live data sources like Microsoft Learn documentation." + ] + }, + { + "cell_type": "markdown", + "id": "96883d46", + "metadata": {}, + "source": [ + "Quick tip! To view the Table of Contents for this Notebook in VS Code or within Codespaces, open the \"Explorer\" tab and expand the \"Outline\" section." + ] + }, + { + "cell_type": "markdown", + "id": "4a5d9005", + "metadata": {}, + "source": [ + "## 6.1. Setting Up Your Environment\n", + "\n", + "First, install the Microsoft Agent Framework. The `--pre` flag is required while the Agent Framework is in preview." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c0865873", + "metadata": {}, + "outputs": [], + "source": [ + "%pip install agent-framework-azure-ai --pre" + ] + }, + { + "cell_type": "markdown", + "id": "3d6e6900", + "metadata": {}, + "source": [ + "### 6.1.1 Load Environment Variables\n", + "\n", + "Load your Microsoft Foundry project endpoint and model deployment name from the `.env` file.\n", + "\n", + "**NOTE:** These values in your .env file are required to ensure the notebook runs seamlessly. 
They should already be there if you deployed using the deployment script in Challenge 0.\n", + "* AZURE_AI_PROJECT_ENDPOINT must equal your Microsoft Foundry project endpoint\n", + "* CHAT_MODEL_NAME must equal the deployed model's name (e.g., `gpt-4o`)" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "79e84127", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "\n", + "from dotenv import load_dotenv, find_dotenv\n", + "load_dotenv(find_dotenv())\n", + "\n", + "# Note: We use the async version of DefaultAzureCredential for the Agent Framework\n", + "from azure.identity.aio import DefaultAzureCredential" + ] + }, + { + "cell_type": "markdown", + "id": "2d5eec04", + "metadata": {}, + "source": [ + "## 6.2. Creating the Research Assistant Agent\n", + "\n", + "### 6.2.1 Import Required Libraries\n", + "\n", + "Import the Agent Framework components and Azure Identity for authentication." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "4a994927", + "metadata": {}, + "outputs": [], + "source": [ + "from agent_framework.azure import AzureAIClient\n", + "from agent_framework import MCPStreamableHTTPTool" + ] + }, + { + "cell_type": "markdown", + "id": "b3394b76", + "metadata": {}, + "source": [ + "### 6.2.2 Define the MCP Tool\n", + "\n", + "Create a function that returns the MCP tool configuration for Microsoft Learn documentation. This allows your agent to query live, up-to-date documentation." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "9bd08b8d", + "metadata": {}, + "outputs": [], + "source": [ + "def create_mcp_tools():\n", + " \"\"\"Create MCP tools for the Research Assistant agent.\"\"\"\n", + " return [\n", + " MCPStreamableHTTPTool(\n", + " name=\"Microsoft Learn MCP\",\n", + " description=\"Provides trusted, up-to-date information from Microsoft's official documentation\",\n", + " url=\"https://learn.microsoft.com/api/mcp\",\n", + " )\n", + " ]" + ] + }, + { + "cell_type": "markdown", + "id": "ce749f50", + "metadata": {}, + "source": [ + "### 6.2.3 Define the Agent Instructions\n", + "\n", + "Create the system instructions that define how the Research Assistant should behave." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "b42304d3", + "metadata": {}, + "outputs": [], + "source": [ + "AGENT_INSTRUCTIONS = \"\"\"\n", + "You are a helpful research assistant that specializes in Azure and Microsoft technologies. \n", + "\n", + "Your responsibilities:\n", + "1. Use the Microsoft Learn MCP tool to find accurate, up-to-date documentation when answering questions\n", + "2. Always cite your sources by providing links to the documentation\n", + "3. If you're unsure about something, acknowledge it and suggest where the user might find more information\n", + "4. Provide clear, concise explanations suitable for developers of varying experience levels\n", + "\n", + "When responding:\n", + "- Start with a direct answer to the question\n", + "- Provide relevant code examples when appropriate\n", + "- Include links to official documentation for further reading\n", + "\"\"\"" + ] + }, + { + "cell_type": "markdown", + "id": "8b414374", + "metadata": {}, + "source": [ + "### 6.2.4 Set Up Environment Variables\n", + "\n", + "Load the project endpoint and model deployment from your `.env` file." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ea960cc0", + "metadata": {}, + "outputs": [], + "source": [ + "PROJECT_ENDPOINT = os.getenv(\"AZURE_AI_PROJECT_ENDPOINT\", \"\").strip()\n", + "assert PROJECT_ENDPOINT, \"ERROR: AZURE_AI_PROJECT_ENDPOINT is missing\"\n", + "\n", + "MODEL_DEPLOYMENT = os.getenv(\"CHAT_MODEL_NAME\", \"\").strip()\n", + "assert MODEL_DEPLOYMENT, \"ERROR: CHAT_MODEL_NAME is missing\"\n", + "\n", + "print(f\"Project Endpoint: {PROJECT_ENDPOINT}\")\n", + "print(f\"Model Deployment: {MODEL_DEPLOYMENT}\")" + ] + }, + { + "cell_type": "markdown", + "id": "968fcd8f", + "metadata": {}, + "source": [ + "## 6.3. Testing the Research Assistant\n", + "\n", + "### 6.3.1 Single Query Test\n", + "\n", + "Let's test the agent with a single question about Azure services." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eeee9085", + "metadata": {}, + "outputs": [], + "source": [ + "async def ask_agent(question: str):\n", + " \"\"\"Send a single question to the Research Assistant agent.\"\"\"\n", + " async with (\n", + " DefaultAzureCredential() as credential,\n", + " AzureAIClient(\n", + " project_endpoint=PROJECT_ENDPOINT,\n", + " model_deployment_name=MODEL_DEPLOYMENT,\n", + " credential=credential,\n", + " ).as_agent(\n", + " name=\"ResearchAssistant\",\n", + " instructions=AGENT_INSTRUCTIONS,\n", + " tools=create_mcp_tools(),\n", + " ) as agent,\n", + " ):\n", + " print(f\"Question: {question}\\n\")\n", + " print(\"Assistant: \", end=\"\", flush=True)\n", + " \n", + " async for chunk in agent.run_stream(question):\n", + " if chunk.text:\n", + " print(chunk.text, end=\"\", flush=True)\n", + " print(\"\\n\")\n", + "\n", + "# Test with a sample question\n", + "await ask_agent(\"What is Azure Kubernetes Service and when should I use it?\")" + ] + }, + { + "cell_type": "markdown", + "id": "94073cb2", + "metadata": {}, + "source": [ + "### 6.3.2 Multi-Turn Conversation with Thread\n", + "\n", + "One of the 
powerful features of the Agent Framework is thread persistence, which maintains context across multiple conversation turns." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a8710a29", + "metadata": {}, + "outputs": [], + "source": [ + "async def multi_turn_conversation(questions: list):\n", + " \"\"\"Demonstrate multi-turn conversation with context retention.\"\"\"\n", + " async with (\n", + " DefaultAzureCredential() as credential,\n", + " AzureAIClient(\n", + " project_endpoint=PROJECT_ENDPOINT,\n", + " model_deployment_name=MODEL_DEPLOYMENT,\n", + " credential=credential,\n", + " ).as_agent(\n", + " name=\"ResearchAssistant\",\n", + " instructions=AGENT_INSTRUCTIONS,\n", + " tools=create_mcp_tools(),\n", + " ) as agent,\n", + " ):\n", + " # Create a thread for multi-turn conversation\n", + " thread = agent.get_new_thread()\n", + " \n", + " for i, question in enumerate(questions, 1):\n", + " print(f\"--- Turn {i} ---\")\n", + " print(f\"You: {question}\\n\")\n", + " print(\"Assistant: \", end=\"\", flush=True)\n", + " \n", + " async for chunk in agent.run_stream(question, thread=thread):\n", + " if chunk.text:\n", + " print(chunk.text, end=\"\", flush=True)\n", + " print(\"\\n\")\n", + "\n", + "# Test multi-turn conversation\n", + "questions = [\n", + " \"How do I set up managed identity for an Azure Function?\",\n", + " \"Can you show me a code example for that?\",\n", + " \"What are the security benefits of using managed identity instead of connection strings?\"\n", + "]\n", + "\n", + "await multi_turn_conversation(questions)" + ] + }, + { + "cell_type": "markdown", + "id": "b65362b9", + "metadata": {}, + "source": [ + "## 6.4. Exploring Agent Capabilities\n", + "\n", + "### 6.4.1 Adding Custom Tools\n", + "\n", + "In addition to MCP tools, you can create custom Python functions as tools. Here's an example of adding a simple calculation tool." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "47556caa", + "metadata": {}, + "outputs": [], + "source": [ + "from typing import Annotated\n", + "\n", + "def calculate_azure_storage_cost(\n", + "    storage_gb: Annotated[float, \"Amount of storage in GB\"],\n", + "    tier: Annotated[str, \"Storage tier: 'hot', 'cool', or 'archive'\"] = \"hot\"\n", + ") -> str:\n", + "    \"\"\"Calculate estimated monthly cost for Azure Blob Storage.\"\"\"\n", + "    # Simplified pricing (actual prices vary by region)\n", + "    prices = {\n", + "        \"hot\": 0.0184,\n", + "        \"cool\": 0.01,\n", + "        \"archive\": 0.00099\n", + "    }\n", + "    price_per_gb = prices.get(tier.lower(), prices[\"hot\"])\n", + "    monthly_cost = storage_gb * price_per_gb\n", + "    return f\"Estimated monthly cost for {storage_gb} GB on {tier} tier: ${monthly_cost:.2f}\"\n", + "\n", + "# You can add this tool to your agent like this:\n", + "# tools=[create_mcp_tools()[0], calculate_azure_storage_cost]" + ] + }, + { + "cell_type": "markdown", + "id": "42ab6430", + "metadata": {}, + "source": [ + "### 6.4.2 Try It Yourself!\n", + "\n", + "Use the cell below to ask your own questions to the Research Assistant. Modify the question and run the cell to see the response." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "56a14bd3", + "metadata": {}, + "outputs": [], + "source": [ + "# Try your own question!\n", + "your_question = \"What are the best practices for Azure OpenAI prompt engineering?\"\n", + "\n", + "await ask_agent(your_question)" + ] + }, + { + "cell_type": "markdown", + "id": "5e733c45", + "metadata": {}, + "source": [ + "### 6.4.3 Try this in the new Foundry Portal (optional)\n", + "\n", + "The Microsoft Foundry portal also provides a no-code experience for creating and testing agents. If you'd like to explore the portal-based approach:\n", + "\n", + "1. Navigate to [Microsoft Foundry](https://ai.azure.com) and open your project using the New Foundry portal \n", + "2. 
Click **Build** in the top right\n", + "3. If you completed the steps above, you should already see a ResearchAssistant agent that you can select. Otherwise, create a new agent and give it a name like \"ResearchAssistant\"\n", + "4. Add instructions similar to what we defined in `AGENT_INSTRUCTIONS` above\n", + "5. Under **Tools**, add the Microsoft Learn MCP tool to give your agent access to documentation\n", + "6. Use the **Playground** to test your agent with the same questions you tried in this notebook\n", + "\n", + "Compare the portal experience with the code-first approach you used here. Consider:\n", + "- When would you prefer the portal vs. code?\n", + "- How might you use both together in a development workflow?" + ] + }, + { + "cell_type": "markdown", + "id": "ebf0f473", + "metadata": {}, + "source": [ + "## 6.5. Summary\n", + "\n", + "In this notebook, you learned how to:\n", + "\n", + "1. **Set up the Microsoft Agent Framework** with the `agent-framework-azure-ai` package\n", + "2. **Create MCP tools** to connect your agent to live data sources (Microsoft Learn)\n", + "3. **Build a Research Assistant agent** with custom instructions\n", + "4. **Use thread persistence** for multi-turn conversations\n", + "5. 
**Extend agents with custom tools** using Python functions\n", + "\n", + "### Next Steps\n", + "\n", + "Consider exploring:\n", + "- Adding more MCP tools (e.g., GitHub, databases)\n", + "- Creating multi-agent systems for complex workflows\n", + "- Implementing agent handoffs for specialized tasks\n", + "- Adding memory and state management for long-running agents" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv (3.13.11)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.13.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-5.7-RedTeaming.ipynb b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-5.7-RedTeaming.ipynb index 0bf538deef..77eecbbdb0 100644 --- a/066-OpenAIFundamentals/Student/Resources/notebooks/CH-5.7-RedTeaming.ipynb +++ b/066-OpenAIFundamentals/Student/Resources/notebooks/CH-5.7-RedTeaming.ipynb @@ -94,7 +94,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "94bcb00a", "metadata": {}, "outputs": [], @@ -199,7 +199,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": ".venv (3.13.11)", "language": "python", "name": "python3" }, @@ -213,7 +213,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.13" + "version": "3.13.11" } }, "nbformat": 4,
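The CH-03-C hunk above swaps the removed `openai.embeddings_utils.cosine_similarity` helper for a hand-rolled NumPy version that computes `np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))`. As a sanity check of that formula, here is a minimal dependency-free sketch of the same computation (the sample vectors are illustrative only, not taken from the notebooks):

```python
import math

def cosine_similarity(a, b):
    # Same quantity as the notebook's NumPy helper: dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 2.0], [1.0, 2.0]))  # ~1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal vectors)
```

Any replacement helper should agree with these values; the NumPy version in the diff computes the identical quantity via `np.dot` and `np.linalg.norm`.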