Green toned image showing green robot looking at brown bottle with green skull
AI, Designed for All, News

AI buzzwords and developments

AI is developing in leaps and bounds. Sometimes it can be tricky to keep up with all the changes. There are a few new terms on the tech scene that I’ll dive in to below. This blog adds to our ad hoc series keeping you up to date on everything artificial and (sometimes) intelligent.

AI poisoning

AI poisoning has hit the news with a splash and generated lots of discussion. There are different ways of identifying AI poisoning, from performance degradation (forcing it to make mistakes), to targeted (adding malicious information to its training data) and backdoor (exploiting the system, often using false data).

This approach is useful for original images, such as artists and photographers. They can download a programme like Nightshade to manipulate their original images (for free) by changing the colour value of pixels. The colour change is minor and invisible to the human eye. For AI, however, this small change interferes with how it analyses images, as it can only aggregate and can’t actually think about the image.

It is completely legal for artists to do this with their artwork and it offers a degree of protection. For many commentators, this seems like a potential solution to AI stealing their artwork. But, to have any real impact on AI, AI poisoning needs to be done at a huge scale. It may end up being a useful deterrent to artist exploitation that has plagued AI.

So what’s the relevance for teaching and learning contexts at Bristol? One can envision these issues fit into critical discussions in Art History, Marketing and Law where copyright are frequently discussed. This could also be of interest to anyone who generates images within their research, from lab samples to site photographs during fieldwork.

Sleeper Agents

With all the manipulation and attacks on AI, there is a risk of creating AI sleeper agents. A research paper on the topic noting that LLMs (AI large language models) can be trained to be deceptive. This deception can generate code that contains vulnerabilities, alongside other types of deceptions. The research raises questions about machine-learning vulnerabilities and security concerns. It demonstrates the importance of using trusted sources when using LLMs. The pinch of salt on all this is that the research was conducted by ChatGPT’s competitor Claude whose own AI code is closed-source. These issues are most likely of interest to those in Engineering dealing with the details of coding.

Other highlights

Some recent news and items you may have missed:

  • A prize-winning author admitted to using AI in her novel. The news may be a useful prompt for students of literature on the future of writing (see MSN).
  • Another one for Law, UK judges are permitted to use AI following a six-page guidance by the Courts and Tribunals Judiciary (see LawGazette). Note: the guidance does note there are potential issues of bias.
  • Arizona State University have partnered with OpenAI in virtually all aspects of its organisation, including administration, research and teaching (see ASU.edu). Headlines ventures include personalised AI “creative buddy” for students and broad access to ChatGPT for engineering students. Right now, it appears like a scatter-gun approach, but it will be fascinating to see where they head next. See also CNBC news coverage.
  • Website owners can now choose to blow AI web crawlers from scraping data off their sites (see The Verge).
  • One for politics students is the growing trend for AI-generated political misinformation (see Wired).
  • According to MIT, AI won’t replace the human workforce anytime soon, because it’s too expensive (see Bloomberg). Japan may disagree with this given its latest embrace of AI tech (see FT).
  • For medicine students, the World Health Organization proposes new guidance for AI (see WHO) and notes AI dangers for poorer countries (see Nature).
  • Billionaires at Davos were also discussing AI (see Forbes).

Further reading

3 thoughts on “AI buzzwords and developments”

  1. The positive use of poisoning (by image creators) makes we wonder whether academics could do something similar with texts, i.e., to mitigate the effects ot students uploading assignments or cases or other course materials to AI.

    Is there a way to include something hidden in the documents we make available to students that will limit what AI can do with them?

  2. Hi Lloyd and Ash, thanks for your comments.
    I was at a conference last year and there was a discussion of this linked to MCQs.
    Using things like digital keys or video might be a way of creating enough ‘barriers’ to render use of AI time ineffective and mitigate the benefits of its use. However there are real inclusion aspects which need to be centred here.
    And that’s within a particular exclude/ban model as Peter has explored https://bilt.online/exclude-embrace-ban/

    On an anecdotal level, at school a classmate believed that essays were not being marked and were just ticked and flicked. Accordingly he put a few random words into the middle of the essay to see if anything was picked up. So I suppose it’s kind of an analog reversal of the above.

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.