Our vision
In the summer of 2025, we held three workshops across Southampton, working with over 50 people who live and work in our city.
Together, we thought about the future of Southampton and the impact AI might have, and put together a vision for Southampton and AI.
We are hopeful that AI will help solve difficult problems like traffic congestion, pollution, and poor health. But AI is often presented as magic, and we should also be sceptical of the claims being made.
We need to experiment and share our experiences so that we can all understand what AI can do and also what its limitations are.
We shouldn't rely on AI for important tasks until we're confident it will be safe and reliable. And we shouldn't let the really big challenges distract us from small everyday tasks that AI could help with.
Essential skills, knowledge, and infrastructure are required for Southampton to use and benefit from AI. Without skills development, it is unlikely that AI can be adopted, implemented and sustained.
Individuals need new skills that support them to use AI safely and effectively, and organisations need to develop the capabilities to identify, implement, and manage new AI tools.
Schools, universities and councils should take leading roles in education and skills development. But we can't just educate young people: AI will change many people's jobs, so everyone needs the chance to learn about it and develop new skills.
AI's integration into the everyday risks diminishing human interaction and the diversity of human experience. As AI tools increasingly influence our choices and conversations, human connection risks being augmented or even replaced. This shift in support mechanisms can lead to a sense of isolation, and these changes can negatively affect overall wellbeing, contributing to a rise in mental health challenges.
I see loads of messages, like, I know people personally and I know how they talk. But now you see all these formal messages and you're like... you don't feel it. You feel like your friends, who were just talking normal human language, now it's all AI.
Even where new technology can make interactions between people and organisations more efficient, there should be options to speak to other human beings. We should recognise and protect the value of connecting with other human beings, even just for a few moments.
AI holds transformative potential, but its power currently resides with the individuals and organisations who design, develop, and implement it. As a result, access to AI and its benefits are not distributed equally across society.
If you can utilise it in a way which is beneficial to people, and which isn't going to scare the pants off of anybody and isn't gonna be unfair, then that's great.
We should recognise that the benefits of AI won't be felt by everyone unless we make an active effort to share them. Organisations implementing AI strategies should consider how benefits can be shared equitably.
Everyone, regardless of background or income, should have access to AI tools and the opportunity to develop AI literacy so that everyone can take part in the dialogue surrounding its implementation.
We want AI to help us build civic accountability by explaining decisions that are made about our city.
But we also want to hold the people who are using AI accountable, particularly in high-risk scenarios. We should know who is responsible for the different systems in our city, and be able to question and challenge them.
I think we should be saying 'this has gone wrong, who was responsible?' Not from the point of view of pointing a finger and saying 'right, you're fired', but from the point of education, saying okay, we've got it wrong (who doesn't get things wrong?), but with AI it's made it worse. What is it that's made the problem? How are you going to know next time whether it's the same problem that's come up? And is there the facility to be able to do that?
[Humans] make bad decisions and bad predictions themselves, and they're not held accountable a lot of the time…
The opportunity for AI to make public decision making more open and understandable should be explored.
Organisations deploying AI should consider how AI will be checked, audited, and explained. Ultimate accountability should lie with an individual or organisation. The skills to understand, explore and challenge AI should be fostered, especially in places like scrutiny committees, which already have the remit to challenge.