Request to review application (structure, documents and evidence)

Hi all,

I am applying for the UK Global Talent Visa under the Exceptional Talent route as a Technical Founder & Software Engineer. I would really appreciate your feedback on my evidence list and how best to structure it for maximum impact.

Background:
I have 5 years of professional experience as a Software Engineer and recently transitioned into entrepreneurship by founding SavrAI, an AI-powered fashion shopping advisor. I was accepted into the Soonami Accelerator (Germany) and shortlisted for Entrepreneur First (UK, Global Talent fast-track list). I also received an offer for a Master’s in Computer Science at Oxford, which I deferred to focus on building my startup.

Previously, I was Lead Data Scientist & Engineer at Jornee, an AI journaling app, where I led development of AI-driven features.

Beyond employment, I was invited as a speaker at the Data Innovation Summit in Stockholm (3,500+ delegates including NVIDIA, OpenAI, Microsoft), and as a NASA evaluator for robotic arm design solutions in the Astrobee challenge. I also actively mentor in the tech community — recognised as a Top 100 UK Mentor on Topmate with testimonials from mentees, and have guided 100+ developers through open-source programmes like Devscript Winter of Code and Google Summer of Code (mentor invite).

Mandatory Criteria (MC) - recognition & leadership

  • Conference Speaker: Invited speaker at the Data Innovation Summit (Stockholm) - 3,500+ delegates, including participants from companies like NVIDIA, OpenAI, and Microsoft. (Invitation email, agenda page, photos/badge).
  • Letters of Recommendation (3): Senior leaders who have known my work for ≥12 months (e.g., an ex-Sony divisional director; a senior engineering lead; a startup founder I built with).
  • Media/Community Recognition: Accelerator acceptance (Soonami) and Entrepreneur First finalist email; organiser’s public post highlighting my work.

OC1 – Innovation / technical contributions

  • Jornee (AI journaling): I owned the emotion-detection pipeline (BERT/GPT variants), a dual-layer taxonomy (primary emotion + unmet need), deployment artifacts, and internal demos. (Tech write-ups, diagrams, commit logs, feature screenshots, product results/testimonials).
  • SavrAI (founder): MVP showcasing AI review summarisation + style recommender; product architecture, feature demos, before/after UX. (Repo extracts, diagrams, product video).

OC2 – Contributions outside employment / open source / mentoring

  • Mentoring (career & technical):
    • Recognised as a Top 100 UK Mentor on Topmate. Proof of mentee testimonials (e.g., one confirmed it helped him crack a first-round interview + later collaborated with me on SavrAI).
    • Mentored 100+ developers via DWoC (certificate + screenshots of contributions).
    • Google Summer of Code (GSoC) mentor invitation email.
  • Linux Foundation (Hyperledger) intern contributor: PRs, issues, commit history, mentor confirmation.
  • NASA Astrobee evaluator: Selection email + evaluation brief.

OC3 – Impact / commercial traction

  • SavrAI traction: Google Analytics screenshots (engagement, retention), early user testimonials, social proof (TikTok/LinkedIn inbound & waitlist).
  • Sony impact (non-confidential): Verification lead for TV Channel Editor for BRAVIA (100k+ installs); process/automation initiative that cut issue TAT and accelerated releases. (Awards email, internal recognition, public app store links; no sensitive code).

Hi @Pranamika_Pandey

You have some good elements to build an application on; however, the evidence needs a lot of strengthening across the criteria to create a successful application.

The fast-track option was removed in the latest changes in August.

LORs are not part of the MC; they form a separate section of their own. Ensure the writers describe your specific contributions and impact.

MC: a single conference invite can be flagged as a one-off event. Tech Nation usually looks for a track record of such events, so adding 1-2 more public pieces of evidence will strengthen this. The second MC evidence (accelerator acceptances and emails) is not considered media recognition, and these are not strong, acceptable MC evidences either. Media recognition means you have been covered in news mentions, PR mentions, etc.

In OC1, apart from the current self-claims, please add acceptable third-party verification such as reference letters, market traction of these innovative products, and any news mentions. It's important to show why these were innovative, your contribution to the innovations, the impact of the innovations, and third-party verification of your contributions and impact.

In OC2, Topmate is not acceptable mentorship evidence, as it's a platform for 1:1 expert calls with no programme structure and no selection on either the mentor or mentee end; it's open to anyone. Online mentorships on such platforms are not considered. Assessors look for offline mentorships with a programme structure and selection of mentees. The Linux contribution and the NASA evaluator email are also not valid evidence for OC2. I wouldn't recommend pursuing OC2, as you don't have strong evidence per the guideline requirements.

In OC3, please ensure you back your self-claims with third-party verification, such as reference letters from senior execs. Your contribution should be clear, along with the quantified impact of what you did. Currently your OC3 lacks both the impact element and third-party verification.

2 Likes

Your application shows good potential but needs significant strengthening before submission. The conference speaking invitation is a solid start, but having only one event can appear as a one-off rather than demonstrating consistent recognition. I’ve seen successful applications that included multiple speaking engagements or media mentions to build a stronger pattern of industry recognition.

Your OC2 evidence needs major revision since Topmate mentoring won’t qualify under current guidelines. The platform lacks the formal structure and selection criteria that assessors look for in valid mentorship programs. Focus instead on your Linux Foundation contributions and NASA evaluator role, but make sure you have concrete proof of impact rather than just participation emails. I’ve reviewed applications where informal mentoring platforms led to immediate rejections.

For OC3, you must provide third-party verification for all your technical claims about SavrAI and Sony work. Self-reported metrics without supporting reference letters from senior executives won’t meet the evidence standard. Include specific quantified impact data with proper documentation, and ensure your Sony contributions clearly demonstrate innovation beyond routine job duties. Applications without external validation of technical achievements consistently face rejection.

2 Likes

@pahuja already mentioned most of the points I had in mind for MC and OC2. I also think your OC3 will be stronger if you have any evidence of revenue traction to demonstrate impact.

3 Likes

Hi all,
Thanks @pahuja @Akash_Joshi @Francisca_Chiedu @Raphael for your suggestions on my evidence earlier this year. I'm posting the revised evidence pack I now have ready. I have made substantial changes based on your earlier feedback (removing weak evidence, merging related items, adding third-party verification, clarifying innovation vs impact, etc.). Below is my final structure. I would appreciate any last guidance before I submit.


Mandatory Criteria (MC)

MC1 – Speaking Recognition (International + UK)

Evidence combined:

  • Data Innovation Summit 2025 (Stockholm) — invitation email, agenda listing, badge/photos, event stats (3,500+ delegates, 1,500+ companies).
  • GirlsWhoML London Featured Speaker — (I spoke about my experience and insights from building a startup) Luma event link, 53 attendees (majority women), an unsolicited attendee post praising my insights, before/after LinkedIn engagement.

How this meets MC1:
Shows a pattern of recognition (not a one-off), one high-profile international event + one UK community event with external validation.


MC2 – Prestigious Accelerator Selections

  • Soonami Accelerator — acceptance email, “Top Team” email, demo-day investor introductions.
  • Antler Final Selection — final round selection email (<3% acceptance rate).

How this meets MC2:
Competitive merit-based selection demonstrating external industry recognition.


MC3 – Hyperledger Open Source Contribution (Linux Foundation)

  • Global open-source internship with a <7% acceptance rate (LFOS).
  • Contributions to Hyperledger Climate Action & Accounting SIG.
  • Merged PRs, CI/CD pipeline improvements, later contributors referencing my work.
  • The project was later integrated into an IBM Call for Code 2022 winning solution.
  • GitHub history showing multi-year contributions to major open source projects (dating back to 2019).

How this meets MC3:
Significant contribution to an open-source digital tech project with external validation, recognised by maintainers.


Optional Criteria 1 (OC1) — Innovation

OC1-A – SavrAI Product Innovation

  • Proprietary AI reasoning engine (deal-breaker rules, decision trees, domain-specific logic).
  • Cost-per-wear intelligence module.
  • Architecture diagrams + code excerpts (non-confidential).
  • Third-party validation: accelerator selection + investor engagement.

OC1-B – Sony “kikAI” Automation System (Innovation inside employment)

  • First AI-based computer-vision automation system for BRAVIA testing.
  • Core technical design: image pipeline, CV workflows, automated multi-model testing.
  • Adopted across 22 European regions for every BRAVIA release since 2022.
  • Third-party verification: letters from ex-Divisional Director + senior engineer (also a stakeholder).

Optional Criteria 3 (OC3) — Impact

OC3-A – SavrAI Commercial Traction

  • Stripe payment confirmation (first customer payment).
  • Google Analytics: ~570 new users + 95 returning, 100% organic traffic.
  • Global adoption across multiple countries (UK, US, UAE, India).
  • User testimonials and unsolicited inbound interest.
  • Investor interest (emails confirming deck reviews + follow-ups).

OC3-B – Sony TV Channel Editor App (Launched to 100K+ users)

  • Led end-to-end release-readiness and verification engineering.
  • Coordinated 64+ test cycles and 231 issues.
  • App reached 100K+ downloads across the EU with positive reviews on the Play Store/App Store and zero critical post-launch issues.
  • Received Sony FY22 Technical Excellence Award (internal recognition for engineering quality).

OC3-C – kikAI Requirements Delivery (High-impact internal contribution)

  • Delivered 50+ cross-functional automation requirements for testing across 22 markets.
  • Included handling a major escalated market quality issue: reproduction time cut from 4 weeks to 2–3 days.
  • Verified by senior engineer (the requirement stakeholder) in reference letter.

Would appreciate feedback on:

  1. Whether MC3 (Hyperledger open-source contribution) is placed correctly, or whether it should go under OC2.
  2. For MC1, I merged one high-profile international conference (Data Innovation Summit, 3,500+ attendees) with the GirlsWhoML event I hosted and spoke at in the UK (<100 attendees — 53 — but high impact). Does that make MC1 stronger?
  3. Whether having two OC1 + three OC3 evidences is acceptable, or whether I should limit myself to exactly two per criterion.
  4. Any red flags remaining or areas needing further strengthening.

Thank you all again for your time and support!

1 Like