CoSTAR PhD Studentships – Fully Funded (2026 Entry)
Applications are now open for 7 fully funded CoSTAR PhD studentships starting in 2026. We warmly invite applications from prospective PhD students from all backgrounds and lived experiences who are interested in undertaking doctoral research within supportive, world-class research and training environments as part of the AHRC-funded CoSTAR project. Whether you are progressing directly from postgraduate study or returning to academia from industry or professional practice, we encourage you to apply. The closing date is Friday 6 March 2026.
Applications now open for CoSTAR PhDs starting in September/October 2026
CoSTAR National Lab is pleased to offer up to 7 fully funded PhDs for students hosted across three higher education institution (HEI) partners within the CoSTAR programme:
• Royal Holloway, University of London – opportunities are open to UK-based students only.
• University of Surrey (Surrey Institute for People-Centred AI) – open to UK and international applicants.
• Abertay University – open to UK and international applicants.
Research environment & industry engagement
As a CoSTAR PhD student, you will work at the intersection of research, creativity, and industry, with access to the CoSTAR National Lab’s creative industries partners, including Pinewood Studios, BT, Disguise, and the National Film & Television School, alongside a wide range of additional collaborators. This offers a unique opportunity to develop research that is intellectually ambitious, creatively driven, and connected to real-world challenges in the creative industries.
We welcome students who are curious, motivated, and excited by research, including those who may not see themselves as a “typical” PhD candidate. You do not need to have a fully formed project at the outset — we value potential, ideas, and a willingness to learn and collaborate, particularly in relation to AI and the creative industries.
Supervision and support
Students will be supported by expert Lead and Co-supervisors at their host institution, alongside a wider network of academic and industry mentors across the CoSTAR partnership. This provides a collaborative, inclusive research culture with opportunities for skills development, interdisciplinary exchange, and professional growth.

The Six CoSTAR Futures
These PhD topic areas align with the wider CoSTAR National Lab research programmes – called ‘Futures’ – which address challenges set by industry and help to build research connections across these areas:
Creative Futures takes the best of sector creativity to enable the application of emergent technologies to current and future opportunities in screen and performance, allowing storytelling to reach into the world and help shape our understanding of it.
Business Futures focuses on developing our understanding of ‘life-centric’ experiences for customers, adapted to their ever-changing needs and priorities, and allowing customers to co-create value and personalised services.
AI Futures will embed cutting-edge and foundational AI into creative industry pipelines, helping to transform the creation, production, delivery, and personalised experience of media content, providing more intuitive and creative control.
Createch Futures seeks rich, distributed, and connected interactive virtual environments, advancing real-time rendering and simulation in virtual production and the associated real-time workflows, optimisation, and generative AI.
User Futures applies the understanding of human factors, human cognition, emotion and user preferences to the creation of inclusive, accessible, intuitive and engaging technologies and experiences of lasting value.
Inclusive Futures explores principles of inclusive innovation and social justice in creative technology for marginalised users, seeking universal accessibility through distributed, democratised, and sustainable advanced production tools and methods.
Funding and financial support
All studentships are fully funded for three or more years and include:
• UK tuition fees, paid directly to the host institution
• A tax-free maintenance stipend at the UKRI minimum rate
UKRI fees and stipend rates for 2026/27 have not yet been confirmed. However, for reference, in 2025/26, UK tuition fees were £5,006 and the annual stipend was £20,780 (or £22,780 where London weighting applied).
Students will also have access to an additional funding grant – the Research Training Support Grant (RTSG) – for research and training costs to support the individual needs of each PhD, which can be requested through their supervisors at their host CoSTAR institution.
Applying
Stage 1: Expression of Interest
Prospective candidates should contact the Lead Supervisor for their chosen PhD opportunity by email and work with the supervisory team to develop a 500-word Expression of Interest, outlining a proposed research project. This may include diagrams, timelines, and references if appropriate. Please submit this Expression of Interest along with a current CV by Friday 6 March 2026. Applications must be complete and submitted on time. Please label files clearly using your surname and the PhD reference ID or keyword (e.g. SURNAME_XR AUDIO) and include this information in all correspondence.
Stage 2: Interviews with shortlisted candidates will be held before Friday 27 March 2026.
Stage 3: Offers will be made by Friday 24 April 2026.
Equality, diversity and inclusion
CoSTAR is committed to fostering an inclusive and welcoming research community, and Equity, Diversity and Inclusion is a priority for CoSTAR. We actively welcome applicants from all backgrounds and are especially keen to recruit people from groups currently under-represented in creative technology, including women, non-binary people, disabled people, and ethnic minority/global majority candidates. We recognise that talent and potential take many forms, and we are committed to supporting all students to thrive.
For more information about CoSTAR and additional guidance on how to apply please see:
Main CoSTAR National Lab page on FindaPhD:
https://www.findaphd.com/phds/program/costar-national-lab-doctoral-programme-fully-funded-2026-entry/?i355p7025
CoSTAR National Lab Substack, post by Claude Heath on the CoSTAR PhD programme, 2025-6: 'Shaping the Future of Creative Innovation' https://substack.com/home/post/p-178879603
CoSTAR National Lab Substack https://substack.com/@costarnationallab
CoSTAR National Lab PhDs https://www.costarnetwork.co.uk/latest/csnl_phd_students
CoSTAR National Lab Doctoral Programme 2026 https://www.royalholloway.ac.uk/research-and-education/research/research-institutes-and-centres/costar-national-lab/costar-national-lab-doctoral-programme/

The PhD Opportunities - the topic areas
To learn more about the PhD opportunities, the supervisors and mentors, and how each topic relates to CoSTAR’s remit to assist the growth of the creative industries, please click on the orange arrow at the right of each PhD topic title below. Please use the email addresses of the Lead Supervisors to reach out, ask questions, and send your CV and Expression of Interest when ready, before 6 March 2026.
Click each to open more detail:
The Future of Digital Identity and Ownership
Lead Supervisor: Prof. Mark Lycett Mark.Lycett@rhul.ac.uk
Second Supervisor Dr. Alex Reppel
Futures: Business
Keywords/ID for application docs: Digital Identity
Based at Royal Holloway, University of London
This project aims to explore the future of digital identity in the creative industries, specifically, how people present themselves—and are represented—in immersive digital environments. Building on the notion of ‘self-sovereign identity’ (and related concepts such as ‘verifiable credentials’ and ‘decentralised identifiers’), this work will conceptualise, implement, and test governance models, smart contract designs and AI-assisted workflows to manage digital assets, specifically the rights associated with people’s digital (i.e., audio-visual) likenesses. The research involves working closely with CoSTAR creative industry partners and collaborators for the evaluation and iterative development of the research and models.
Emerging Technologies in Music: Implications for Musicians and Audiences
Lead Supervisor: Dr. Maruša Levstek Marusa.Levstek@rhul.ac.uk
Second Supervisor Prof. Jen Parker-Starbuck
Futures: Users
Keywords/ID for application docs: Music Tech
Based at Royal Holloway, University of London
This PhD investigates the use cases and implications of emerging technologies, including AI and the advent of digital worlds, for the music industry, especially for artists and audiences. It comes at a crucial moment, as technologies and their uses in the creative industries evolve rapidly. The PhD research project can be shaped by the candidate working with the supervisory team; potential topics include artist–audience interactions and liveness, virtual identities in virtual and hybrid virtual–physical performances, the role of AI in music, and perceptions of human creativity, AI-generated music, and AI-assisted workflows. This project will work directly with CoSTAR creative industry partners and collaborators and will address commercial challenges and opportunities identified by industry. The candidate will also work alongside the CoSTAR National Lab’s BRAID research project (Bridging Responsible AI Divides), helping to critically evaluate demonstrators from that research. This PhD is ideal for candidates from psychology, HCI, digital humanities, or related fields who are interested in mixed-methods, experimental, and participatory research approaches, especially those keen to apply their research to the creative industries.
AI-Assisted Worldbuilding in Advanced Production
Lead Supervisor: Prof. Adam Ganz Adam.Ganz@rhul.ac.uk
Second Supervisor Dr. Claude Heath
Mentor Prof. Peter Richardson
Futures: Creative / Worldbuilding
Keywords/ID for application docs: Worldbuilding
Based at Royal Holloway, University of London
Expressions of interest are welcomed for a practice-based PhD to research the application of worldbuilding in AI-assisted advanced production, with a focus on how to maximise benefit for independent and lower-budget film and TV production. The practice-based research should investigate and critically reflect on storytelling and worldbuilding in the creative industries, identify a current problem in screen convergence, and show how the practice-based PhD research on worldbuilding can be used to address it. The research should explore creative solutions, using AI-assisted advanced production methods, enhancing and supporting human creative endeavour and preserving creative intent in the film and television industries. The aim of the research is to create original IP and show how this approach to worldbuilding can be used to maximise return on investment in the creative industries. We envisage that researchers will engage with CoSTAR industry partners and utilise the CoSTAR Pinewood Studios research facility, opening in 2026, where the Indie Film Hub is based.
In their expression of interest, applicants should state how they plan to develop story and worldbuilding materials, and innovate upon current creative industry practices, using AI-assisted advanced production in ways that are original both technically and creatively. This should include a draft outline and timeline of their proposed research programme. The research should be supported by theoretical, historical and other material augmenting the work conducted with industry, adding context to the creative practice. We are open to a range of storytelling and worldbuilding approaches and methods, including but not limited to design fictions, storyworld design, and performance. Candidates will need to evidence their experience of innovation in storymaking and their capacity to draw insights from their process. Experience of advanced production environments is desirable but not essential.
Venue Digitisation Toolset and Pipeline for Immersive Extended Reality
Lead Supervisor: Prof. Nuno Barreiro Nuno.Barreiro@rhul.ac.uk
Second Supervisor Prof. Carlos Matos at Royal Holloway, University of London
Mentor Dr. Matt Bett at Abertay University
Futures: Standards / Createch
Keywords/ID for application docs: XR Venues
Based at Royal Holloway, University of London
The creative industries are increasingly looking for ways to produce high-fidelity digital representations of real-world spaces for immersive XR experiences, heritage preservation, virtual production and other uses. Yet digitising complex venues remains costly, time-consuming, and locked within proprietary systems that may exclude smaller studios and independent producers. This research addresses how to capture and reconstruct venues as production-ready 3D assets – efficiently, affordably, and at quality levels suitable for real-time rendering – using spatial capture pipelines combining LiDAR, photogrammetry, and AI-assisted reconstruction (including Gaussian splatting), while evaluating effectiveness across production contexts.
Key research questions include: What approaches to capture of real-world spaces offer optimal trade-offs between speed, cost, and visual fidelity? How can automation reduce timelines while maintaining the geometric and textural accuracy real-time rendering demands? What open-source workflows can reduce vendor lock-in and improve access for creative practitioners?
The project will develop and test these pipelines through case studies of performance and heritage venues with CoSTAR partners from the creative industries, producing empirical comparisons across capture technologies, practical guidelines for XR production workflows, and open-source tools supporting interoperability across Unreal Engine and real-time platforms. Expected contributions will address sector barriers to growth such as reducing levels of expertise and investment required for venue digitisation and will create opportunities by enabling creative reuse of culturally significant spaces and supporting XR production capabilities across the UK creative industries.
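As a purely illustrative sketch of the speed/cost/fidelity trade-off the research questions above pose, one could frame the comparison of capture pipelines as a weighted score. All of the pipeline names, timings, costs, and fidelity figures below are hypothetical placeholders, not project data or measured results:

```python
# Hypothetical trade-off comparison for venue-capture pipelines.
# All figures are illustrative placeholders, not measured results.
from dataclasses import dataclass


@dataclass
class CapturePipeline:
    name: str
    capture_hours: float  # on-site capture time
    cost_gbp: float       # equipment + processing cost
    fidelity: float       # 0..1 subjective visual-fidelity score


def score(p: CapturePipeline, w_speed=1.0, w_cost=1.0, w_fidelity=2.0) -> float:
    """Higher is better: reward fidelity, penalise time and cost (normalised)."""
    return (w_fidelity * p.fidelity
            - w_speed * p.capture_hours / 24
            - w_cost * p.cost_gbp / 10_000)


pipelines = [
    CapturePipeline("LiDAR scan", capture_hours=8, cost_gbp=6000, fidelity=0.90),
    CapturePipeline("Photogrammetry", capture_hours=16, cost_gbp=1500, fidelity=0.80),
    CapturePipeline("Gaussian splatting", capture_hours=4, cost_gbp=800, fidelity=0.85),
]

best = max(pipelines, key=score)
print(best.name)  # → Gaussian splatting (under these invented weights)
```

In practice the PhD would replace these invented numbers with the empirical comparisons the project sets out to produce, and the weighting itself would vary by production context.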
Adaptive Intelligence for Hybrid XR Infrastructure
Lead Supervisor: Dr. Laith Al-Jobouri l.al-jobouri@abertay.ac.uk
Second Supervisor Dr. Javad Zarrin
Mentor Prof. Nuno Barreiro at Royal Holloway
Futures: Createch / Standards
Keywords/ID for application docs: Hybrid XR
Based at Abertay University
There is a transformative opportunity for the creative industries to create highly interactive, collaborative, and democratised applications of extended reality (XR) using nascent 5G, edge computing and cloud infrastructure and hybrid compute. This PhD will investigate how AI-driven orchestration can optimise offloading between resource-constrained devices and nearby edge or remote cloud resources. The research will help unlock new creative applications and workflow efficiencies in the sector, through distributed rendering and real-time collaboration across the diverse compute environments that are found in the sector.
By exploring the cutting edge of adaptive, multi-tier strategies, the project paves the way for scalable, high-quality XR experiences to be delivered to audiences. This research will address the limitations of current systems by intelligently trading latency against fidelity and power efficiency, and by supporting the adoption of multi-tier approaches – creating a new paradigm that goes beyond device-only, edge-only, or cloud-only systems, which at present treat resource placement as a static, one-time decision. The research aims to replace this with a dynamic, intelligent, AI-assisted approach that will be trialled with creative sector companies in the UK.
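To make the device/edge/cloud trade-off concrete, a minimal sketch of a per-frame placement decision is shown below. The tier characteristics, weights, and latency budgets are hypothetical assumptions for illustration only; the PhD envisages replacing such a hand-tuned rule with learned, AI-driven orchestration:

```python
# Illustrative sketch of a dynamic offloading decision for an XR frame.
# Tier characteristics and weights are hypothetical, not project data.

TIERS = {
    # name: (latency_ms, fidelity 0..1, device_energy_mj per frame)
    "device": (8.0, 0.55, 40.0),
    "edge":   (18.0, 0.85, 12.0),
    "cloud":  (45.0, 0.95, 10.0),
}


def choose_tier(latency_budget_ms: float, w_fidelity=1.0, w_energy=0.01) -> str:
    """Pick the tier maximising fidelity minus an energy penalty,
    subject to the frame's latency budget; fall back to on-device."""
    feasible = {k: v for k, v in TIERS.items() if v[0] <= latency_budget_ms}
    if not feasible:
        return "device"
    return max(feasible, key=lambda k: w_fidelity * feasible[k][1]
                                       - w_energy * feasible[k][2])


print(choose_tier(20))  # tight budget: edge wins
print(choose_tier(60))  # relaxed budget: cloud's fidelity wins
```

The key design point is that the decision is re-evaluated per frame as the budget changes, rather than being fixed once at deployment – the "static, one-time decision" the project seeks to move beyond.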
Every Body in 3D: Inclusive Single-View Human Reconstruction
Lead Supervisor: Dr. Marco Volino m.volino@surrey.ac.uk
Futures: AI / Createch / Inclusive
Second Supervisor Dr. Eddy Zhu
Third Supervisor Prof. Adrian Hilton
Mentors: Prof. Angela Chan and Dr. Hazel Dixon at Royal Holloway
Keywords/ID for application docs: Every Body
Based at University of Surrey
This project investigates inclusive single-view 3D human reconstruction, aiming to advance computer vision so it can accurately represent people with diverse body types, limb differences, and mobility aids. Existing reconstruction methods are typically trained on datasets built around “average” or able-bodied anatomies and often rely on strong priors of symmetry and canonical body structure. As a result, they frequently produce biased, incomplete, or anatomically incorrect digital models when applied to individuals whose bodies fall outside these narrow norms. This creates exclusion in applications such as virtual reality, avatar creation, and digital accessibility tools.
To overcome these limitations, the project will build a representative, high-resolution dataset using photogrammetry and motion capture. This dataset will be developed collaboratively with inclusivity specialists based at Royal Holloway, University of London (RHUL) to ensure ethical data collection, respectful representation, and fairness in design. Using this foundation, the research will develop new priors, learning strategies, and reconstruction approaches capable of handling non-standard anatomies, occlusions from assistive devices, and a broader spectrum of human shapes. Ultimately, the research aims to support accessible virtual environments and inclusive avatar generation, contributing to a future where 3D vision technologies can accurately and respectfully represent every possible body.
This research aligns with the CoSTAR programme’s focus on AI-assisted advances in production by developing data, models, and evaluation methods that support scalable and automated creation of high-quality 3D human assets from minimal input. By reducing reliance on restrictive assumptions such as symmetry and standardised anatomy, the project aims to improve robustness and generalisation in real-world production pipelines, working with CoSTAR creative industry partners and collaborators. The work has clear relevance to industry through CoSTAR, particularly in areas such as immersive media, games, XR, virtual production, and accessibility-focused digital tools, where inclusive and efficient digital human reconstruction is increasingly important. By addressing bias and representation at the level of data and model design, the project contributes to responsible AI technologies that can be adopted within creative and commercial production workflows.
AI-Based Text-to-Soundscape Generation for Interactive Immersive Production
Lead Supervisor: Prof. Enzo de Sena e.desena@surrey.ac.uk
Second Supervisor Prof. Philip Jackson
Keywords/ID for application docs: AI Soundscapes
Futures: AI / Createch
Based at University of Surrey
While visual fidelity has advanced rapidly across media production in the creative industries, audio context has lagged. Interactive content production includes, for example, immersive location-based experiences (LBX, including museums, theme-park attractions, escape rooms and mixed-reality installations), as well as VR/XR production and low-cost indie film and live TV production. Sound designers face a challenge: how to deliver reactive, narrative-led soundscapes without the help of large teams of audio engineers or extensive field recordings dedicated to their current project. This project proposes to use AI-based soundscape generation: an automated system that transforms scene descriptions into spatially coherent audio suitable for a variety of interactive applications. Leveraging recent text-to-audio generative models (e.g., AudioLDM2, StableAudio, AudioCraft), the project aims to democratise high-quality ambient sound generation by enabling low-cost, script-driven audio context that can be combined with other forms of designer input to produce responsive spatial sound experiences.
This PhD project proposes a system for automatically generating spatially rich soundscapes from scene descriptions, addressing the high barriers to entry for immersive sound design in interactive entertainment. Building on state-of-the-art text-to-audio models, the research will develop new spatial conditioning methods trained on a curated dataset of multichannel soundscapes linked to script-style descriptions and key narrative elements. The research will synthesise interactive experiences with spatial audio, improving control for the designer and immersion for the user. The research includes dataset development, model training, and evaluation through designer-in-the-loop testing and installations working with creative industry partners and collaborators, with expected contributions including novel datasets, generative spatial-audio models, and evidence of their effectiveness in interactive immersive production contexts across the creative industries.
(Image: The Futures Studio at Royal Holloway, showing LED screens and camera.)