I’m preparing a Global Talent (Promise) application and would appreciate feedback on how I’m structuring my Optional Criteria.
Brief Background
• Quantitative researcher/trader at a proprietary trading firm – ownership of research infrastructure adopted across the team and production ML systems (all proprietary).
• Academic research – 3 publications/preprints (6.5, 5.5 and 3.5 years ago respectively) (total ~35 citations), with one more likely to be uploaded to arXiv soon.
• Early contributor to a blockchain startup – designed oracle incentive mechanisms; modelling + simulations; mechanisms integrated into live product.
• Great Senior AI Engineering offer from a startup with a UK presence.
Referees
– Executive Director at a bank (former colleague)
– VP-level colleague
– Academic supervisor
Planned Evidence Mapping
Mandatory Criteria (Leadership / Potential)
• Evidence of ownership and leadership in development of production trading systems and research infrastructure.
• Market validation via senior AI Engineering offer.
OC3
• Evidence of ML models deployed in live trading.
• Internal adoption confirmation (senior colleague letter).
• Redacted architecture/design documentation for infrastructure I built.
OC1
• CEO letter from blockchain startup confirming early contribution and deployment of incentive mechanism into live product.
• Redacted modelling/simulation documentation.
• Explanation of mechanism design problem and how my contribution shaped implementation.
Structurally, does MC (current firm leadership/ownership) + OC3 (current firm technical contribution) + OC1 (blockchain innovation) seem coherent for my background, or would replacing OC1 with OC4 (academic research) create a stronger overall narrative?
For proprietary work, what forms of evidence have worked best (redacted documents vs recommender validation vs screenshots)?
How strictly is the “within 5 years” rule applied? I have older public-facing achievements (competitions, mentorship, social impact) from ~5–6 years ago.
Would really value any feedback!
Thanks in advance - I’m finding the mapping slightly tricky given that a high percentage of my recent work is proprietary.
Hi @snt, hope you’re doing well. You’ve got some good evidence to work with, but you need to be very clear about your positioning. Researcher on its own isn’t an eligible skill set; it can support an academic-leaning narrative, but it’s safer and more coherent to frame yourself as an AI Engineer with research components rather than leading with quantitative researcher.
On the publications: anything published 5+ years ago is already outside the acceptable window for evidence. The citation count is decent, but because the work is old, it won’t carry as much weight. Also, you chose OC1 and OC3, but this evidence rightly aligns with OC4.
Referees
An Executive Director at a bank is fine, but avoid describing them as a colleague: everyone you work with is technically a colleague, and that framing reduces the perceived seniority and oversight they had over your work. The same issue applies to the VP. Your academic supervisor may not be ideal either, because academics aren’t automatically considered sector experts unless their work directly aligns with your claimed field.
For your MC and OCs
“Evidence of ownership and leadership…” and “Evidence of ML models deployed…” are descriptions, not evidence. What exactly are you planning to submit? Lines of code? Email screenshots? Time-stamped architecture docs showing ownership? Without specifics, it’s hard to give a meaningful review.
Overall, it’s difficult to assess the strength of your mapping with only high level summaries.
Hi @Raphael, thanks, this is very helpful, and I agree that clarity of positioning is key.
Yes, I completely agree that I should be positioning myself as an applied AI Engineer with a research/trading bent in the quant space.
On referees, my intention is to use individuals who directly oversaw my work (Executive Director and VP-level supervisor) plus my research supervisor. From your experience, is that the right balance, or do assessors generally prefer more external industry figures even if they had less direct oversight?
On evidence specifics (rather than descriptions), my plan is:
MC
Redacted high-level architecture of the production trading systems I designed (including model deployment and risk components), with written confirmation from a letter writer who oversaw this work.
Redacted documentation of research / automation infrastructure I built that was adopted across the team, again with confirmation of adoption and scope.
Senior AI Engineering offer letter to evidence external market validation and upcoming impact in the UK AI space.
OC3
Design document for the volatility forecasting system (problem, constraints, implementation decisions), with supervisor confirmation.
Documentation of risk management systems built for live portfolios, focusing on infrastructure complexity and production responsibility.
Potentially, work I did for a blockchain startup, validated by the CEO, if it strengthens OC3.
OC4
AISTATS 2022 publication (within 5 years), with acceptance stats and citation metrics.
A new 2026 arXiv preprint demonstrating ongoing research activity.
Given that most of my work is proprietary, my intention is to rely on design documents (with sensitive information redacted) plus confirmation from letter writers, rather than screenshots or code. In your experience, has that generally been sufficient in proprietary environments, or would you suggest strengthening the evidentiary approach further?