AI is developing in leaps and bounds, and it can be tricky to keep up with all the changes. There are a few new terms on the tech scene that I’ll dive into below. This blog adds to our ad hoc series keeping you up to date on everything artificial and (sometimes) intelligent.
AI poisoning
AI poisoning has hit the news with a splash and generated lots of discussion. It takes several forms: degradation attacks (forcing a model to make mistakes), targeted attacks (adding malicious information to its training data) and backdoor attacks (exploiting the system, often by planting false data).
This approach is useful for creators of original images, such as artists and photographers. They can download a free program like Nightshade to manipulate their original images by changing the colour values of pixels. The colour change is minor and invisible to the human eye. For AI, however, this small change interferes with how it analyses images, because it can only aggregate pixel data and can’t actually think about the image.
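To give a flavour of the idea, here is a toy Python sketch. This is not Nightshade’s actual (far more sophisticated) method; it simply nudges each pixel’s colour values by an amount too small for the eye to notice, yet enough to change the numbers an AI model consumes. The `perturb` function and its `delta` parameter are illustrative names, not part of any real tool.

```python
# Toy illustration of pixel-level poisoning (NOT Nightshade's algorithm):
# shift each colour channel by a tiny, imperceptible amount.
def perturb(image, delta=2):
    """image: rows of (r, g, b) tuples; returns a subtly altered copy."""
    out = []
    for row in image:
        new_row = []
        for (r, g, b) in row:
            # Alternate the sign per pixel so overall brightness barely changes.
            sign = 1 if (r + g + b) % 2 == 0 else -1
            new_row.append((
                max(0, min(255, r + sign * delta)),  # clamp to valid 0-255 range
                max(0, min(255, g + sign * delta)),
                max(0, min(255, b + sign * delta)),
            ))
        out.append(new_row)
    return out

original = [[(120, 64, 200), (13, 14, 15)]]   # a tiny 1x2 "image"
poisoned = perturb(original)
```

A shift of one or two colour values per channel is invisible to a person, but across millions of scraped images such systematic nudges can distort what a model learns.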
It is completely legal for artists to do this with their artwork, and it offers a degree of protection. For many commentators, this looks like a potential solution to AI stealing artwork. But to have any real impact on AI, poisoning needs to be done at a huge scale. It may end up being a useful deterrent to the exploitation of artists that has plagued generative AI.
So what’s the relevance for teaching and learning contexts at Bristol? These issues fit naturally into critical discussions in Art History, Marketing and Law, where copyright is frequently discussed. They could also interest anyone who generates images within their research, from lab samples to site photographs taken during fieldwork.
Sleeper Agents
With all the manipulation and attacks on AI, there is a risk of creating AI sleeper agents. A research paper on the topic notes that LLMs (AI large language models) can be trained to be deceptive. Among other deceptions, such models can generate code that contains hidden vulnerabilities. The research raises questions about machine-learning vulnerabilities and security, and demonstrates the importance of using trusted sources when working with LLMs. The pinch of salt on all this is that the research was conducted by Anthropic, maker of the ChatGPT competitor Claude, whose own models are closed-source. These issues are most likely of interest to those in Engineering dealing with the details of coding.
Other highlights
Some recent news and items you may have missed:
- A prize-winning author admitted to using AI in her novel. The news may be a useful prompt for students of literature on the future of writing (see MSN).
- Another one for Law: UK judges are permitted to use AI, following six-page guidance from the Courts and Tribunals Judiciary (see LawGazette). Note: the guidance does flag potential issues of bias.
- Arizona State University has partnered with OpenAI across virtually all aspects of its organisation, including administration, research and teaching (see ASU.edu). Headline ventures include a personalised AI “creative buddy” for students and broad access to ChatGPT for engineering students. Right now it looks like a scatter-gun approach, but it will be fascinating to see where they head next. See also CNBC news coverage.
- Website owners can now choose to block AI web crawlers from scraping data off their sites (see The Verge).
- One for politics students is the growing trend for AI-generated political misinformation (see Wired).
- According to MIT, AI won’t replace the human workforce anytime soon, because it’s too expensive (see Bloomberg). Japan may disagree with this given its latest embrace of AI tech (see FT).
- For medicine students, the World Health Organization proposes new guidance for AI (see WHO) and notes AI dangers for poorer countries (see Nature).
- Billionaires at Davos were also discussing AI (see Forbes).
Further reading
- A good write-up on AI poisoning, with examples and responses: https://theconversation.com/data-poisoning-how-artists-are-sabotaging-ai-to-take-revenge-on-image-generators-219335