New AI research at University of Sheffield awarded share of £100 million government funding

New research at the University of Sheffield that will build principles for the responsible use of AI across the public, private and third sectors has been awarded a share of £100 million in government funding.

The New Indian Section, South Kensington Museum. Illustrated London News, May 22, 1880, 501. © Royal Armouries DI 2016–0270, used with permission.
  • New funding for research that will define responsible artificial intelligence (AI) across education, policing and the creative industries has been announced today (Tuesday 6 February 2024)
  • The University of Sheffield has been awarded a share of £100 million in government funding that underlines the UK's commitment to leading AI research and its ethical deployment
  • Funding for two University of Sheffield projects comes from the Arts and Humanities Research Council (AHRC) and will define what responsible AI use is across sectors such as education, policing and the creative industries

The Secretary of State for Science, Innovation and Technology, Michelle Donelan, has today (Tuesday 6 February 2024) announced funding for new research that will deliver next-generation innovations and insights into the use of artificial intelligence (AI) and underline the UK's commitment to maintaining a leadership position in AI research and its ethical deployment.

The University of Sheffield will host two projects that will define what responsible AI use is across public and cultural sectors.

Supported with funding from the Arts and Humanities Research Council (AHRC) through the Bridging Responsible AI Divides (BRAID) programme, these projects will produce early-stage research and recommendations to inform future work in this area. They illustrate how the UK is at the forefront of defining responsible AI and exploring how it can be embedded across key sectors.

Dr Joanna Tidy will lead a team based in the University of Sheffield’s Department of Politics and International Relations to investigate the responsible use of AI in the museum and heritage sector, specifically in relation to biases in AI which stem from the colonial history of museum collections. The project is in partnership with the Royal Armouries, the UK's national museum of arms and armour.

Dr Tidy said: “Museums and heritage institutions are increasingly using AI tools such as Machine Learning, Natural Language Processing, and Machine Vision to enhance visitor interaction with their collections.

“However, a well-recognised problem with AI is bias, including how AI algorithms reproduce skewed underlying data. For museums and heritage institutions, a challenge for responsible AI use lies in how underlying biases in museum collections, such as those rooted in colonial origins and histories, are reproduced through AI data processing and outputs.

“It is a crucial time to be defining what the responsible use of AI can and should look like for different settings, and we need to work across academic boundaries and engage with a wide range of applied expertise to explore ways forward.”

Dr Denis Newman-Griffis will lead a second team from the University of Sheffield's Information School and Department of Philosophy, working with organisations across the public, private and third sectors to build shared learning, values and principles for responsible AI, enabling best practice development, helping to organise information and supporting decision making.

Partners on this project include the British Library, Sheffield City Council, the multinational data science consultancy firm Eviden, and the Open Data Institute through the Data as Culture programme.

Dr Newman-Griffis said: “This project will help us learn what ‘responsible artificial intelligence’ really means for teams and organisations dealing with the changing AI landscape today.

“Whether it is in helping to organise and share research and heritage materials, informing data-driven policymaking in local government, or mining troves of data for business insight, using AI responsibly needs a clear understanding of who is involved and what matters to them around AI use. Our research will help map out new directions for making responsible AI a living, breathing practice and lay the groundwork for other organisations to build their own policies on AI much more easily.”

Prof Laurence Brooks, from the University of Sheffield’s Information School, said: “AI has the potential to be of transformational benefit to the world, and already exists within so many aspects of our lives. But, as with other digital technologies, it also has the potential for ignorant and unfair use. The difference is the choices we make, what we call responsible AI. This project aims to help organisations develop an understanding of responsible AI and contribute to a better world.”

Dr Susan Oman, from the University of Sheffield’s Information School, said: “We’ve a great team of colleagues and partners with a keen interest in the burgeoning responsible AI space. We’ll be working with an artist, as well as these organisations, to intervene in questions of what and who responsible AI is for, and how it works in practice.”

Professor Christopher Smith, Executive Chair of the Arts and Humanities Research Council and UKRI International Champion, said: “The impact of AI can already be felt in many areas of our lives. It will transform our jobs and livelihoods, and impact on areas as diverse as education, policing and the creative industries, and much more besides. UKRI’s research will be at the heart of understanding this new world.

“The research which AHRC announced today will provide lasting contributions to the definition and practice of responsible AI, informing the practice and tools that are crucial to ensure this transformative technology provides benefits for all of society.”

In addition to the scoping projects, AHRC is confirming a further £7.6 million to fund a second phase of the BRAID programme, extending activities to 2027/28. The next phase will include a new cohort of large-scale demonstrator projects, further rounds of BRAID Fellowships, and new professional AI skills provision, co-developed with industry and other partners.
