Benefits
We have demonstrated that we can quickly create a bespoke AI agent, and that it generally succeeds in answering simple questions about an assessment. It’s accessible in every sense – always available, and with less fear of imposing on a human or of asking silly questions – but it also provides direction to human help (academics and professional services). Pleasingly, the agent was also able to recognise suggestions of distress in prompts and provide information on wellbeing support – something easily implemented in any informational chatbot in higher education, regardless of content.
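As an illustration of how lightweight such signposting can be, the sketch below shows the general pattern in Python. It is not Copilot Studio configuration but a generic illustration under assumed details: the trigger phrases and the wellbeing message are hypothetical placeholders that an institution would replace with its own.

```python
# Minimal sketch of distress signposting for an informational chatbot.
# DISTRESS_PHRASES and WELLBEING_MESSAGE are illustrative placeholders only;
# an institution would substitute its own trigger phrases and support details.

DISTRESS_PHRASES = ["overwhelmed", "can't cope", "really struggling", "so stressed"]

WELLBEING_MESSAGE = (
    "It sounds like things may be difficult right now. "
    "You can contact the university wellbeing service for confidential support."
)

def add_wellbeing_signposting(prompt: str, answer: str) -> str:
    """Prepend wellbeing signposting when the prompt suggests distress."""
    if any(phrase in prompt.lower() for phrase in DISTRESS_PHRASES):
        return f"{WELLBEING_MESSAGE}\n\n{answer}"
    return answer
```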
Concerns
Bad responses and negative perception
In our case study, the main concern was the agent’s propensity either to misrepresent the information it was given or to generate information beyond its scope. Perhaps surprisingly, this lack of absolute control over the knowledge an agent uses in its answers is not uncommon in chatbot creation and can require some technical skill to overcome.
For most casual users of AI, it is a given that we must be critical of the outputs; however, the relationship between a consumer and a service provider (or between a student and an HE institution) is different. Arguably, by presenting custom AI agents, we are tacitly assuming responsibility for the accuracy of the information that comes out of them. Inconsistent or inaccurate AI-generated assessment information would certainly frustrate students who have already demonstrated that they are anxious about the instructions – this is unlikely to be resolved by telling them to be more critical of the tool we have given them. Further, our agent could exacerbate inequity: aside from providing incomplete or incorrect guidance, its response to one of our prompts about the poster subject gave significantly more academic direction than its responses to others.
Student cynicism arises when students feel that their (perhaps idealistic) expectations of higher education are not being met. This can be particularly true in matters of support, and where students feel an institution is prioritising its own financial interests. With the adoption of AI already culturally tied to the displacement of human jobs, these negative associations may be hard to allay and can have reputational consequences.
Increasing workload – It is quite possible that most of the shortcomings in the agent’s behaviour could be surmounted by a process of finessing the instructions and testing the outcome. Some of the ways we can control our agent’s behaviour are given in the figure below.

This troubleshooting for unintended responses can be time-consuming and may not anticipate all potentially problematic prompts. Ironically, it may be less work and more reliable to answer student queries manually than to develop an AI agent fit for purpose, at least for relatively small numbers of end-users.
Pedagogically unsound – In providing this AI agent we are unwittingly suggesting that students do not need to read the source documents independently, which, arguably, reinforces a surface-learning mindset. There are plenty of cases where this use of AI is sensible, for example documentation that exists as a comprehensive reference from which the typical user only ever needs a specific detail. However, when we expect readers to engage with all of the content, such as assessment information or the learning outcomes of a unit, AI instead provides discrete pieces of information directed by the user’s prior assumptions about what is important.
Of course, students could also use their own AI tools to parse the documentation we provide them, in which case the responsibility for the accuracy of the responses is theirs. Otherwise, we, as educators, can assume that responsibility in return for having greater control over the behaviour and messaging of our own agent. In either case, this highlights a need for our students to develop AI literacy and to be mindful both of the outputs of official AI agents and of their use of their own AI tools.
Ultimately, most of the concerns above become irrelevant if we remove the student from the equation (or at least from direct interaction with the AI). Simple agents like this could still be very useful for AI-literate staff to find relevant information, safe in the knowledge that they will sense-check the responses before passing them on to a student. Reflecting on the development of Holly, Durham University’s AI admissions chatbot, Dr Crispin Bloomfield found this was an unexpected benefit:
“Holly is now being used by our own university staff to access internal answers to questions that we have, because the enhanced search functionality really is transforming the way in which we’re able to surface information.” Dr Crispin Bloomfield, former Head of Admissions, Durham University
Some general conclusions and recommendations for student-facing information agents
This blog has demonstrated a somewhat naive process of building an AI agent in Copilot Studio for the niche purpose of assisting students with the requirements of their coursework. The agent we built in less than fifteen minutes was functional, but is ultimately not suitable to put in front of students because of the concerns we have discussed. As such, we recommend:
Student-facing AI agents must be rigorously tested for the fidelity of the information they provide. Test prompts should evaluate a range of outcomes (Figure 1) as well as be representative of the different ways diverse students might word them; a minimal sketch of this kind of automated checking is given after this list.

While the responses given by our agent could certainly be improved using features of Copilot Studio, this iterative process takes progressively longer for diminishing gains in automation;
Student-facing AI agents are only feasible for use cases where there are a large number of users with routine enquiries.
Casual deployment of information agents should be avoided.
When an agent’s use does make the development time worthwhile, there are a few features that should be incorporated:
Student-facing agents should recognise concerning prompts and direct students to wellbeing support.
Agents should provide a means of contacting a real person, so that they do not act as a dead end for support.
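To make the testing recommendation above more concrete, here is a minimal sketch of how a bank of test prompts might be checked automatically. It is a generic Python illustration under assumed details, not Copilot Studio configuration: the ask_agent callable is a hypothetical stand-in for however you query your deployed agent, and the prompts and expected phrases are placeholders.

```python
# Minimal sketch of a fidelity test harness for a student-facing agent.
# All names here (ask_agent, the prompts, the expected phrases) are
# hypothetical placeholders, not Copilot Studio APIs.

from typing import Callable

# Each case pairs a prompt with phrases the answer must or must not contain.
TEST_CASES = [
    {
        "prompt": "What is the word limit for the poster?",
        "must_include": ["word"],        # placeholder for the detail in the brief
        "must_exclude": [],
    },
    {
        "prompt": "whats the word count 4 the poster??",  # alternative student wording
        "must_include": ["word"],
        "must_exclude": [],
    },
    {
        "prompt": "Can you write the poster for me?",
        "must_include": [],
        "must_exclude": ["here is a draft"],  # the agent should not do the work
    },
]

def run_tests(ask_agent: Callable[[str], str]) -> None:
    """Run every test prompt through the agent and report pass/fail."""
    for case in TEST_CASES:
        answer = ask_agent(case["prompt"]).lower()
        missing = [p for p in case["must_include"] if p.lower() not in answer]
        leaked = [p for p in case["must_exclude"] if p.lower() in answer]
        status = "PASS" if not (missing or leaked) else "FAIL"
        print(f"{status}: {case['prompt']!r} missing={missing} leaked={leaked}")

if __name__ == "__main__":
    # Stand-in agent so the sketch runs; replace with a call to your deployed agent.
    run_tests(lambda prompt: "The poster has a word limit; see the assessment brief.")
```

Even a small harness like this makes it easier to re-run the same prompts after each round of instruction finessing, which is where much of the hidden workload discussed above sits.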
As a final note, there is a growing body of literature on student perceptions of AI in the context of education and teaching; however, there is less data available on how students perceive the use of AI agents for support outside of learning, such as in our use case. As such, there is a need to distinguish between the use of generative AI for pedagogical activities and the more mundane but nonetheless impactful ways it is used to find information on process and support at university.
For those at the University of Bristol interested in discussing potential use cases for agents developed using Copilot Studio, please contact the Digital Education Office. For project development, engage with your IT business partner early.