
Building the Bot 

Copilot Studio is Microsoft’s platform for developing and deploying custom AI agents. It is relatively accessible to the lay user as it provides a ‘no-code’ option. Agents produced in Copilot Studio use the same Large Language Models (LLMs) and information sources as the standard Microsoft Copilot interface but, importantly, they can be supplemented with a wide range of additional knowledge sources, and their behaviour in response to user prompts can be customised with much finer control. In this case study we use the trial version of Copilot Studio.

For our purposes, a custom agent named ‘FAQ assistant’ was created with instructions to answer student queries using the knowledge sources provided, or to give details on how to contact a member of staff. Essentially, we want the agent to recognise when it can’t answer a question and escalate to our equivalent of customer support.

The process of creating the agent uses generative AI rather than explicit programming: we tell Copilot Studio how we want the agent to behave using natural language, and it generates the required instructions and settings. This method of directing an agent is very accessible and typical of most AI tools. However, unlike explicitly defined programming, there is a disconnect between what you tell the agent to do and how it actually evaluates and responds to prompts.

There is an adage in computer science that a program will do what you tell it to do, not what you want it to do. Here, we are going even further – testing whether an AI agent can interpret what we tell it to do and whether its output is actually what we want.

Initial Prompt

The initial prompt that we provide our agent is the principal method by which we control its behaviour. In AI, getting the prompt right is critical to obtaining the desired outcome, and an entire field of prompt engineering has emerged, described as ‘the process of developing effective instructions to obtain the desired output’. The general idea is to be as detailed and explicit as possible.

Our initial prompt for the agent was:

You will answer queries from university students regarding their coursework for a pharmacology unit. Your answers should be professional, clear and concise. You should answer using information in the provided FAQ documents only. You should not use external or internet sources to answer questions. If the answer isn’t available in the FAQ documents you should ask the students to post a question on the unit directors padlet at <url to message board> or email the student admin team using <email for student admin team>

Copilot Studio uses this prompt to create general instructions and settings for the agent. These can be edited directly later.
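
Copilot Studio handles this step without any code, but it may help to see the same pattern written out explicitly. The sketch below is a rough conceptual parallel only, not what Copilot Studio does under the hood: it uses the openai Python package as a stand-in chat API, and the model name, the FAQ_TEXT placeholder file and the escalation wording are our own illustrative assumptions.

```python
# Illustrative sketch only: Copilot Studio generates this behaviour from the
# natural-language prompt; here the equivalent pattern is written by hand
# against a generic chat-completion API (not Copilot Studio itself).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical placeholder for the text of the uploaded FAQ documents.
FAQ_TEXT = open("faq_documents.txt").read()

SYSTEM_PROMPT = f"""
You answer queries from university students about their pharmacology coursework.
Be professional, clear and concise.
Answer ONLY from the FAQ content below; do not use external or internet sources.
If the answer is not in the FAQ content, tell the student to post on the unit
director's Padlet at <url to message board> or email <email for student admin team>.

FAQ content:
{FAQ_TEXT}
"""

def answer_query(question: str) -> str:
    """Send one student question to the model under the constrained system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, purely for illustration
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_query("When is the poster deadline?"))
```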

Knowledge Sources

We uploaded the guidance documents provided to students as the ‘knowledge sources’ that the agent should use to answer queries. These were uploaded in PDF format and included the written brief, a list of frequently asked questions, and the poster and presentation marking descriptors.

Copilot Studio can interpret information from discrete documents, such as Microsoft Office files, though it can also be directed to public web addresses and linked to Microsoft SharePoint sites to find data.

It is crucial to think about data protection, as any information in the knowledge sources linked to an AI agent can be recalled by the end user. Managing SharePoint/Copilot privileges is quite complex, and AI agents are very adept at ‘surfacing’ information, making it particularly risky to link to a SharePoint site that contains sensitive content.
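
As a rough mental model of what a ‘knowledge source’ gives the agent, the sketch below extracts text from the FAQ PDFs and does a naive keyword search over it. This is emphatically not how Copilot Studio indexes documents; the pypdf library, the file names and the scoring here are illustrative assumptions only.

```python
# A toy stand-in for a 'knowledge source': pull text out of the FAQ PDFs and
# find the passage that best matches a query. Copilot Studio's own retrieval
# is far more sophisticated; this only illustrates the concept.
from pypdf import PdfReader  # third-party library, assumed available

# Hypothetical file names matching the documents described in the post.
SOURCES = ["written_brief.pdf", "faq.pdf", "marking_descriptors.pdf"]

def load_passages() -> list[str]:
    """Extract one text passage per page from each knowledge-source PDF."""
    passages = []
    for path in SOURCES:
        for page in PdfReader(path).pages:
            text = page.extract_text() or ""
            if text.strip():
                passages.append(text)
    return passages

def best_match(query: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the query (naive scoring)."""
    query_words = set(query.lower().split())
    return max(passages, key=lambda p: len(query_words & set(p.lower().split())))

passages = load_passages()
print(best_match("What format should the poster be?", passages)[:500])
```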

Topics

A nice feature of Copilot Studio is that we can influence the flow of conversation using the ‘topics’ function. From the outset we wanted the agent to direct the user to personal tutoring and wellbeing support if they expressed stress or concern in their prompts. We created a topic triggered by queries containing phrases like ‘scared’, ‘worried’, ‘overwhelmed’ etc., which responds with a specific message giving details of support services.

In this way, we bypass any attempt by the agent to evaluate and answer these queries in favour of a static message.
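
Conceptually, such a topic acts as a keyword-triggered short-circuit in front of the model. The sketch below shows that routing pattern in plain Python: the trigger words come from the post, but the function names and the wording of the support message are illustrative assumptions rather than anything exported from Copilot Studio.

```python
# Illustrative routing only: if the prompt contains a wellbeing trigger phrase,
# return a fixed support message and never consult the model at all.
TRIGGER_WORDS = {"scared", "worried", "overwhelmed"}  # extend as needed

# Hypothetical static response standing in for the topic's message node.
SUPPORT_MESSAGE = (
    "It sounds like you might be finding things difficult. Please contact your "
    "personal tutor or the university wellbeing service for support."
)

def route(prompt: str) -> str:
    """Check for wellbeing triggers before letting the agent answer normally."""
    words = set(prompt.lower().split())
    if words & TRIGGER_WORDS:
        return SUPPORT_MESSAGE          # static topic response, model bypassed
    return answer_from_faq(prompt)      # fall through to normal FAQ behaviour

def answer_from_faq(prompt: str) -> str:
    # Placeholder for the agent's usual FAQ answering
    # (e.g. answer_query from the earlier sketch).
    return "…"

print(route("I'm really worried about the poster deadline"))
```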

Topics can be used to develop very sophisticated behaviour, allowing the agent to ask questions, apply logic and interface with other agents. This added sophistication does, however, increase the time needed to build and evaluate the agent.

Making the agent available

To test our chatbot, the FAQ assistant was deployed as a ‘Demo Website’, which provides a URL through which the chatbot can be accessed in a web browser.

The trial version of Copilot Studio is limited in how an agent can be deployed, and as of September 2025 it is not possible to publish custom agents to Microsoft SharePoint or Teams sites. Some form of end-user authentication is important, however, as without it anyone who has a link to the agent can access the information.

View the other posts in this series here.
