The Growing Risk of Third-Party AI
When even Microsoft can't spot fake AI, it's time to examine why third-party AI risk management is failing. Either we're not asking the right questions or we're not understanding the answers.
In May 2025, Builder.ai collapsed overnight, taking $450 million in investor funds with it. The London-based startup had spent eight years selling "AI-powered" app development that was in reality the work of 700 engineers manually writing code. Microsoft invested. Qatar Investment Authority invested. The BBC, Virgin, and NBC signed contracts. Fast Company ranked them the third most innovative AI company in the world, narrowly beaten by OpenAI. They all missed the fraud that a simple Google search would have revealed.
This isn't a story about one spectacular fraud. It's about a systemic failure that threatens every organisation buying AI services: we lack the basic capability to verify AI vendor claims about their viability, security, and governance.
Builder.ai's fraud was elegant in its simplicity. While marketing revolutionary AI automation, they routed customer requests to development centers in India and Ukraine. Engineers were trained to hide their location, time their communications to UK hours, and maintain the fiction that AI was doing the work. The company claimed apps were "80% built by AI" when the technology was, according to a lawsuit from their former chief business officer Robert Holdheim [1], "barely functional."
The scale of deception only became clear in bankruptcy:
Over $450 million in investor funds evaporated overnight [2]
Over 1,500 employees laid off
Thousands of businesses left stranded without access to their code or data
Revenue inflated by 300-400% (claiming $220 million when actual revenue was $50-55 million) [3]
Customer projects spanning years, many still incomplete
The warnings that everyone ignored
What makes this story truly troubling is that the Wall Street Journal exposed the fraud in August 2019, publishing a detailed investigation revealing that Builder.ai relied on human engineers, not AI [4]. Former employees went on record. An executive sued, alleging deceptive practices. Customer reviews described it as "a con job."
Yet four years later, in May 2023, Microsoft made a "significant" equity investment and announced “a new, deeper collaboration fuelled by Azure AI that will bring the combined power of both companies to businesses around the world” [5][6][7].
New customers kept signing contracts, possibly reassured by Microsoft's confidence. Fast Company ranked them among the world's most innovative AI companies, just behind OpenAI and DeepMind [8]. Their website proudly listed flattering coverage in Forbes, Yahoo Finance, Bloomberg, CNBC, BusinessWire, Fortune, Geekwire, and TechCrunch.
When a fraud this obvious can persist for eight years, sidestep journalists' inquiries, and fool the due diligence of thousands of companies, Microsoft included, we have to confront an uncomfortable truth: our frameworks for evaluating AI vendors are broken. We're either not asking the right questions, or we're choosing not to listen to the answers.
The AI supply chain you didn't know you had
The Builder.ai collapse exposed more than one company's fraud: it revealed a gaping hole in how we govern AI procurement, vendor relationships, and the opaque supply chains behind AI services.
One reality I've learned from implementing AI governance systems and helping clients do the same: every organisation using third-party AI services is now part of a complex, interconnected supply chain. Unlike traditional software, AI systems create unique vulnerabilities that cascade through organisations in ways we're only beginning to comprehend.
The simultaneous outages of ChatGPT, Claude, and Perplexity on June 4, 2024 [9], demonstrated this interconnectedness. What started as a configuration error in OpenAI's Kubernetes infrastructure rippled across the AI ecosystem, leaving millions of businesses unable to access critical AI services for over six hours. Organisations discovered they had no backup plans, no alternative providers ready, and no real understanding of their exposure to AI infrastructure failures.
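That kind of outage is easier to absorb when the dependency is wrapped rather than hard-coded. Below is a minimal sketch of a provider-fallback wrapper; the `call_openai` and `call_anthropic` names in the usage note are hypothetical stand-ins for whatever vendor SDK calls you actually use, and this illustrates the pattern rather than a production failover design.

```python
# Minimal sketch of a provider-fallback wrapper. The provider callables are
# hypothetical stand-ins for real vendor SDK calls.
import logging
from typing import Callable, Sequence

logger = logging.getLogger("ai_fallback")


class AllProvidersFailed(Exception):
    """Raised when every configured AI provider fails."""


def complete_with_fallback(
    prompt: str,
    providers: Sequence[tuple[str, Callable[[str], str]]],
) -> str:
    """Try each provider in order, returning the first successful response."""
    errors: dict[str, str] = {}
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # in practice, catch provider-specific errors
            logger.warning("Provider %s failed: %s", name, exc)
            errors[name] = str(exc)
    raise AllProvidersFailed(f"No provider responded: {errors}")


# Usage (hypothetical provider callables):
# result = complete_with_fallback(
#     "Summarise this contract...",
#     providers=[("openai", call_openai), ("anthropic", call_anthropic)],
# )
```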
AI providers operate through layers of dependencies: foundation models from one company, fine-tuning from another, hosting on someone else's infrastructure, with data processing scattered across continents. Your "simple" AI chatbot might depend on OpenAI's models, AWS infrastructure, vector databases from Pinecone, and training data sourced from and shared with dozens of providers.
The Builder.ai collapse revealed what happens when you don't have visibility into these dependencies. Thousands of businesses lost access to their code overnight because they never understood their true supply chain. They thought they were buying a product; they were actually buying into a complex web of dependencies they couldn't see or control.
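Making that web visible doesn't require sophisticated tooling to start. Here is a minimal sketch of an AI supply-chain inventory, an "AI bill of materials" of sorts; the vendors, fields, and exit plans are illustrative assumptions, not an existing standard schema.

```python
# A minimal sketch of an AI supply-chain inventory. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class AIDependency:
    name: str           # e.g. "foundation model" or "vector database"
    provider: str       # who operates it
    role: str           # LLM API, hosting, retrieval store, data source...
    data_location: str  # where customer data is processed or stored
    exit_plan: str      # what happens if this dependency disappears tomorrow


@dataclass
class AIService:
    name: str
    dependencies: list[AIDependency] = field(default_factory=list)

    def unmitigated(self) -> list[AIDependency]:
        """Dependencies with no documented exit plan."""
        return [d for d in self.dependencies
                if d.exit_plan.lower() in ("", "none", "unknown")]


chatbot = AIService(
    name="customer support chatbot",
    dependencies=[
        AIDependency("foundation model", "Model vendor A", "LLM API", "US", "unknown"),
        AIDependency("vector database", "Vendor B", "retrieval store", "EU", "nightly export to S3"),
        AIDependency("hosting", "Cloud provider C", "infrastructure", "EU", "multi-region failover"),
    ],
)
print([d.name for d in chatbot.unmitigated()])  # -> ['foundation model']
```

Even a list this simple forces the question Builder.ai's customers never asked: what is my exit plan for each link in the chain?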
The five critical areas of AI vendor risk
The Builder.ai collapse revealed one type of risk, fraud, but it's just the tip of the iceberg, and possibly not even the most significant. In my experience, organisations consistently underestimate or miss five critical categories of third-party AI risk.
1. Security Risks: When AI becomes the attack vector
Third-party AI models can become a critical vulnerability in enterprise security infrastructure. JFrog researchers discovered at least 100 malicious ML models on the Hugging Face platform, with some models executing code directly on victim machines and establishing persistent backdoor access [10]. These malicious models harboured payloads that could compromise user environments through code-execution attacks [11].
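Most of these payloads abuse serialisation formats that execute code the moment a model file is loaded. A minimal defensive sketch follows, assuming the `torch` and `safetensors` packages are installed and using illustrative file paths; it reduces the attack surface but is not a complete defence.

```python
# A minimal sketch of defensive model loading, not a complete defence.
# Assumes the `torch` and `safetensors` packages; paths are illustrative.
from pathlib import Path

import torch
from safetensors.torch import load_file


def load_weights(path: str) -> dict:
    """Load model weights while avoiding arbitrary code execution via pickle."""
    p = Path(path)
    if p.suffix == ".safetensors":
        # safetensors stores raw tensors only; nothing executes on load
        return load_file(p)
    # For legacy pickle checkpoints, weights_only=True restricts unpickling
    # to tensor data (PyTorch 2.x); it reduces, but does not remove, risk.
    return torch.load(p, weights_only=True)


# state_dict = load_weights("models/vendor_model.safetensors")
```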
The sophistication of these attacks extends beyond simple malware distribution. Mithril Security researchers demonstrated how to surgically modify an open-source model and upload it to Hugging Face so that it spread misinformation while remaining undetected by standard benchmarks [12].
The risks extend beyond downloading malicious models; you also face exposure when the AI platforms themselves are breached. In June 2024, Hugging Face disclosed a significant security breach of its Spaces platform, where hackers gained unauthorised access to authentication secrets [13]. The company confirmed that "a subset of Spaces' secrets could have been accessed without authorisation," potentially exposing API keys, tokens, and sensitive credentials of organisations using the platform.
Beyond platform breaches, AI-powered services themselves have become targets and sources of data exposure. While I couldn't find examples of direct hacks of AI service companies (an odd outlier that actually concerns me, given how prevalent cybersecurity incidents are elsewhere), the ChatGPT incident in March 2023 demonstrates the vulnerability of customer data in AI services. A bug in ChatGPT's Redis library exposed user chat histories and payment information, with some users able to see titles from other users' conversations and potentially view payment details including names, email addresses, and partial credit card numbers [14].
2. Data Privacy Risks: When AI handwaves privacy protection
Otter.ai exemplifies the privacy dilemma of AI services. Its live meeting transcription is hugely convenient, but its privacy policy states that it trains its AI on your "de-identified audio recordings" that "may contain Personal Information" [15]. The de-identification methods? Proprietary and unauditable. How is confidential information treated? No information is forthcoming. You're trusting, not verifying in any meaningful way, trading convenience for heightened risk.
That risk became evident in 2022 when journalist Phelim Kine interviewed Uyghur activist Mustafa Aksu. After the interview, Otter.ai sent Kine a targeted survey that named the activist and asked about "the purpose of this particular recording" [16], a chilling moment given the Chinese government's persecution of Uyghurs. Otter's confused response (first confirming, then retracting) highlighted how AI services can inadvertently become surveillance tools.
So if your organisation handles sensitive conversations, boardroom discussions, medical consultations, competitive strategy, or legal matters, the question becomes whether you can trust your AI transcription service not to use your data in ways that expose confidential, private information.
In May 2024, users discovered that Slack's privacy principles stated its systems "analyse customer data (e.g. messages, content, and files)" to develop AI models, with all users opted in by default [17]. Customers had to proactively email Slack to opt out, raising alarms about the use of internal company communications for AI training without explicit consent. While Slack later clarified that it uses only de-identified aggregate data for non-generative models like emoji recommendations, the incident revealed how workplace AI tools can repurpose sensitive business communications in ways organisations may not fully understand or control.
3. Vendor Lock-in Risks: When AI providers become captors
When the UK Competition and Markets Authority began investigating AI markets in 2024, it uncovered something troubling: firms controlling critical inputs for AI development were already positioning themselves to restrict access, shielding themselves from competition [18]. How were they doing this? By creating dependencies that are nearly impossible to replicate:
Model-specific optimisations that don't transfer
Training data that can't be extracted
Custom configurations with no equivalent elsewhere
Business logic deeply intertwined with specific AI behaviours
Think about what happens when you've spent months training an AI model on a specific platform. Your data becomes entangled in proprietary formats. Your workflows optimise around unique features. Your team develops expertise in platform-specific development and operations tools. Each integration deepens the dependency until extraction becomes not just expensive but architecturally complex, or even impossible. OpenAI's GPT fine-tuning illustrates the trap: after spending time training a model on their platform, you can't export the weights, can't replicate the training infrastructure, and can't migrate your customisations to another provider.
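One partial mitigation is to keep vendor SDKs behind a thin, provider-neutral interface so application code never binds directly to one platform's API. A minimal sketch, with purely illustrative class names and no real vendor SDK calls:

```python
# A minimal sketch of a provider-neutral interface that keeps application code
# decoupled from any one AI vendor's SDK. Names are illustrative.
from abc import ABC, abstractmethod


class TextModel(ABC):
    """What the application depends on; vendor SDKs stay behind this boundary."""

    @abstractmethod
    def generate(self, prompt: str, max_tokens: int = 256) -> str: ...


class VendorAModel(TextModel):
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        # call Vendor A's SDK here; translate their request/response shapes
        raise NotImplementedError


class VendorBModel(TextModel):
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        # call Vendor B's SDK here
        raise NotImplementedError


def summarise(document: str, model: TextModel) -> str:
    # Application logic only ever sees the TextModel interface, so swapping
    # vendors becomes a configuration change rather than a rewrite.
    return model.generate(f"Summarise the following document:\n{document}")
```

It doesn't solve the fine-tuning problem, but it keeps the blast radius of a vendor exit to one adapter class rather than the whole codebase.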
4. Business Continuity Risks: When AI providers vanish
The AI graveyard is growing. Tally raised $200 million, helped consumers pay down $2 billion in credit card debt using AI-powered debt management, then shut down overnight in 2024 [19]. Customers lost access to their financial management tools with no warning, no data export, no transition plan.
Ghost Autonomy burned through $220 million before shutting down worldwide operations on April 3, 2024 [20]. Despite backing from OpenAI and promises of revolutionary autonomous driving software, 100 employees lost their jobs and customers were left with nothing but a brief note on the company's website.
InVision, the UX design platform once valued at $2 billion, announced it was shutting down at the end of 2024 [21]. Users were warned that all of their data would be permanently deleted, leaving thousands of design teams scrambling to export years of work.
These aren't outliers. They're the predictable outcome of AI companies burning venture capital without sustainable business models. The truth is that 90% of AI startups fail within their first year of operation [22].
Your AI vendor's runway might well be shorter than your implementation timeline.
5. Compliance Risks: The regulatory minefield
When you use third-party AI, you inherit compliance obligations that you may not fully understand, and vendors are increasingly disclaiming responsibility for them. The regulatory landscape is evolving rapidly: the EU AI Act and emerging frameworks worldwide create obligations that cascade through the supply chain. Existing laws on data protection, intellectual property, consumer protection, employment, and more all apply to AI, even as AI-specific regulations emerge. So when you integrate a third-party AI service, you're betting that the vendor will maintain compliance with regulations they may not understand, or that don't even exist yet.
The liability gap is real and growing. Recent analysis from Stanford's CodeX reveals that 92% of AI vendors claim broad data usage rights, only 17% commit to full regulatory compliance, and just 33% provide indemnification for third-party IP claims [23]. This is a stark departure from traditional SaaS contracts, where compliance commitments and indemnification are the norm.
Consider the cascading effects of vendor non-compliance. Italy's €15 million fine against OpenAI in December 2024 sends a clear message: the regulator cited both the training of ChatGPT on personal data without a legal basis and the failure to report a data breach within the 72-hour window required under GDPR Article 33 [24]. While OpenAI bears the direct penalty, organisations using ChatGPT to process customer data now face questions about their own compliance posture.
Healthcare organisations face particular exposure: 71% of healthcare workers still use personal AI accounts for work purposes [25], despite HIPAA's strict requirements. When these tools aren't HIPAA-compliant and vendors don't sign business associate agreements, healthcare providers face direct liability. The Equal Employment Opportunity Commission (EEOC) has made it clear that companies using AI products or services for employment decisions can still be liable under employment discrimination laws, even when the product or service is fully developed or administered by a third-party vendor [26].
The contractual reality is sobering. AI vendor agreements systematically shift risk to customers. Standard contract terms typically limit recoverable damages to direct damages with very low liability caps [27] and disclaim liability for compliance failures, leaving the customer solely responsible for compliance obligations [28].
Why Third-Party Due Diligence Is Failing
After implementing AI governance within a big tech company and, more recently, assisting smaller clients, I've become convinced of an uncomfortable truth: our due diligence processes are theatrically inadequate for AI vendors. The wrong questions are being asked of the wrong people at the wrong time. Certifications and banal questionnaire responses are being deployed as convenient deflections in place of true disclosure and inspection. And even when the right questions are asked, the expertise to evaluate the answers is often lacking.
Failure Cause #1: Questions that don’t work. Many organisations have hastily added AI checkboxes to existing vendor assessments. The typical questions I see, including those from frameworks like FS-ISAC's Generative AI Vendor Risk Assessment Guide [29], follow a predictable pattern:
"Do you use AI in your product or service?"
"What type of AI technology do you employ?"
"Do you have an AI ethics policy?"
"How do you ensure compliance with AI regulations?"
“Do you have certification X, Y or Z?”
These questions aren't wrong; they're just largely irrelevant. Builder.ai would have aced this questionnaire. They had policies, compliance frameworks, and could describe their "proprietary AI technology" in impressive detail. The problem? None of it was real. These questions are easily answered with pleasantries and vagaries, but they don’t provide meaningful information that can actually verify trustworthiness.
Failure Cause #2: The Expertise Gap. There simply aren’t enough capable people who can ask the right questions, then understand and challenge the answers. Procurement teams know contracts, not code. They can negotiate indemnity clauses but can't distinguish between genuine AI issues and sophisticated technobabble. When Builder.ai claimed "80% autonomous development," procurement teams probably lacked the technical knowledge to challenge it. IT departments have narrowed their focus to integration: Will it work with our systems? What's the API like? Can we get data in and out? They assume the AI works as advertised and focus on making it fit.
Risk managers apply traditional vendor frameworks covering financial stability, data security, and compliance to a fundamentally different challenge. Their assessments reveal whether a vendor has cyber insurance, but not whether their "AI-powered" solution is inadvertently disclosing personal data, or whether it contains any AI at all. And finally, perhaps most dangerous of all, executive sponsors themselves can become a major problem. Under pressure to "implement AI or fall behind," they can override careful evaluation with urgency. "We need to move fast" becomes the enemy of "we need to verify this works." Microsoft's investment in Builder.ai, despite public fraud allegations, exemplifies this dynamic.
Failure Cause #3: The Certification Deflector Shield. In my opinion, the certification industry, as it operates today, may be making this worse, not better.
Standards like ISO 42001 serve a legitimate and important purpose. They provide organisations with a comprehensive framework for establishing AI governance structures, documenting processes, and implementing management systems. For organisations serious about AI deployment, ISO 42001 offers valuable guidance on building the foundation for responsible AI operations. The certification process itself adds value through external assessment, helping organisations identify gaps in their management systems and verify implementation completeness. That matters for high-integrity internal governance and operational maturity: it is a valuable way to gain expert external validation and find opportunities to improve. However, this is where the value ends and the problem begins.
Organisations cling to certifications like SOC 2, ISO 27001, and ISO 42001 as proof of capability and legitimacy, but these standards were never designed to verify whether AI actually works as claimed or is truly operated safely, securely, or responsibly. Vendors wave the acronyms like talismans against scrutiny, avoiding deeper engagement and the kinds of transparent disclosure that could illuminate real risks. A one-page certificate contains little more than a scope, a date, and a company name, attesting that a certification body spent a few days reviewing documents and asking questions. Here's what an ISO 42001 accredited certification actually verifies: that you have documented processes for managing AI systems, and a minimal level of evidence that those processes are operating. It verifies that you've written policies about data governance, risk assessment, and ethical considerations, and that you've implemented those processes. What it emphatically does not verify:
Whether your "AI" is actually AI or humans pretending to be AI
Whether your policies and practices are safe or responsible
Whether your algorithms work as claimed
Whether your AI is compliant with regulations or not
Whether your accuracy metrics are real or fabricated
Whether your AI is safe, unbiased, or privacy-preserving in practice
I believe this creates a dangerous dynamic. Enterprise buyers increasingly treat third-party certifications as a gatekeeper for AI and software vendors. Gartner's 2024 Security Compliance Report found 78% of enterprise clients now insist on a SOC 2 Type II attestation before signing a contract. Certification also shapes the shortlist: Gartner Digital Markets reports that security certification, reputation, or privacy practices were the decisive reason 46% of organisations chose their most recent software supplier [30]. A-LIGN's 2025 Compliance Benchmark indicates 76% of companies plan to pursue an AI-specific audit or ISO 42001 certification within the next two years [31].
In my experience, when procurement teams see "ISO 42001 Certified, ISO 27001 Certified, SOC 2 Certified", at best they tend to shortcut due diligence, and at worst they assume it is complete. They are frequently unaware of the limitations of these certifications, and less inclined to ask difficult questions. These certifications reveal very, very little about actual AI performance or safety. Implementing these standards and verifying that implementation through a certification process has value, but not as a shield against deeper scrutiny and transparency, which is how they are so often used.
You might think that international standards for supply chain and third-party risk management, such as ISO 27036, would help, but unfortunately their utility is limited. Designed for conventional IT relationships, ISO 27036 assumes suppliers provide predictable products with understood risks, not dynamic AI systems with opaque architectures and emergent behaviours. Until standards or practices evolve to address AI's unique characteristics, its opacity, its evolution, its complex supply chains, we're using tools designed for a fundamentally different problem.
The Uncomfortable Questions We Should Be Asking
Forget the checkbox questionnaires, and set aside the badges and certifications. Real third-party AI due diligence requires uncomfortable questions, and more importantly in my view, it requires asking them face-to-face. The most revealing assessment tool I've found is a 30-minute interview with the vendor's engineering team. Not their sales team, not their solutions architects, but the people actually building the technology. In half an hour of direct conversation, you'll learn more about their real capabilities than from any questionnaire or any certification.
Watch for the tells: Do they speak fluently about technical challenges, or do they deflect to marketing speak? Can they explain failures and limitations, or do they claim everything works perfectly? When you ask about edge cases, do they have immediate, specific examples, or do they need to "get back to you"? Will they share what they’ve found in recent algorithmic or adversarial testing? Real engineers building real AI love to talk about the hard problems they're solving. Fake AI vendors will try to keep engineers away from customers.
The vendor risk management industry is beginning to respond to the AI challenge, though most solutions remain in early stages. There are some automated platforms integrating AI-specific risk assessments into their third-party risk management workflows. These tools promise automated questionnaires, continuous monitoring, and AI-driven risk scoring. AI-powered vendor assessment tools now offer capabilities like automated data collection, intelligent risk scoring, predictive risk analysis, and continuous monitoring using machine learning algorithms to identify anomalies.
But here's the uncomfortable truth: these automated tools suffer from the same fundamental flaw as traditional assessments. They can verify a vendor has an AI ethics policy, but not whether their AI is real. They can check for ISO 42001 certification, but as we've established, that proves nothing about actual AI capabilities. Until these assessments include technical validation, actual testing of AI capabilities, they remain not much more than sophisticated theatre.
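What would actual testing look like? A minimal sketch of a vendor validation harness follows: replay your own labelled test cases through the vendor's API and compare measured accuracy and latency against the claims in the sales deck. The `call_vendor` function and `claimed_accuracy` figure are hypothetical placeholders, and exact-match scoring is only a starting point for more realistic evaluation.

```python
# A minimal sketch of a vendor validation harness. `call_vendor` is a
# hypothetical stand-in for the vendor's actual API client.
import time
from statistics import mean


def validate_vendor(call_vendor, test_cases, claimed_accuracy: float) -> dict:
    """Measure accuracy and latency on (prompt, expected_answer) pairs."""
    correct, latencies = [], []
    for prompt, expected in test_cases:
        start = time.perf_counter()
        answer = call_vendor(prompt)
        latencies.append(time.perf_counter() - start)
        correct.append(answer.strip().lower() == expected.strip().lower())
    measured = mean(correct)
    return {
        "cases": len(test_cases),
        "measured_accuracy": round(measured, 3),
        "claimed_accuracy": claimed_accuracy,
        "gap": round(claimed_accuracy - measured, 3),
        "avg_latency_s": round(mean(latencies), 3),
    }


# report = validate_vendor(call_vendor, my_labelled_cases, claimed_accuracy=0.95)
```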
What do you think is the answer?
I've spent this article dissecting the problems with AI vendor verification, exposing the gaps in our due diligence processes, and suggesting that direct technical interviews might reveal the truth. But to be honest: I don't have a scalable solution to this.
The approach I've suggested, 30-minute interviews with engineering teams by skilled technical assessors, has obvious limitations. It doesn't scale. There aren't enough qualified assessors to evaluate the thousands of AI vendors flooding the market. Even if there were, the cost and time requirements would be prohibitive for most organisations. And let's be frank: vendors building genuine AI are already overwhelmed with customer demands. Adding mandatory technical deep-dives to every sales process won't be a welcome addition to their workload.
So what might work? I see a few potential paths forward, though none are complete solutions:
Radical transparency as competitive advantage: Some vendors might differentiate themselves through unprecedented openness, publishing real performance metrics, issues and incidents, allowing sandbox testing on customer data, providing detailed technical documentation, and yes, making their engineers available for scrutiny. The market could reward this transparency, creating pressure for others to follow.
Regulatory-mandated third-party testing: Governments could require independent technical verification before AI products enter the market, similar to pharmaceutical trials or automotive safety testing. But this path has its own pitfalls: regulatory capture by big tech companies who can afford compliance, innovation-crushing delays for startups, and the risk of ossified testing standards that can't keep pace with AI's evolution. We've seen how medical device approval processes, while ensuring safety, can delay beneficial innovations by years.
Enhanced certification as a middle path: What if the problem isn't certification itself, but the poverty of information these certifications provide? Today's AI certifications are binary: pass or fail, compliant or not. They tell us almost nothing useful; they certainly don't provide meaningful information to inform risk assessment. As an analogy from cybersecurity, it's like the difference between receiving a one-page ISO 27001 certificate and a 30-page unredacted penetration test report: one tells you a company checked some boxes, the other reveals exactly how its defences held up under near-real attack.
But certifications could evolve into rich, living documents that actually provide useful information.
Imagine certifications that published real performance benchmarks and continuous monitoring data; technical architecture details; standardised incident reporting that makes failures transparent rather than buried; audit trails showing what was actually tested, not just what was claimed; and algorithmic audits documenting the methodology and results of bias testing.
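To make that concrete, here is a sketch of what a machine-readable certification artifact could carry, in contrast to a one-page certificate. Every field, vendor name, and figure below is an illustrative assumption, not an existing standard or real data.

```python
# An illustrative sketch of a machine-readable certification record.
# The schema and all values are assumptions, not an existing standard.
certification_record = {
    "vendor": "ExampleAI Ltd",
    "scope": "document-summarisation service v2.3",
    "issued": "2025-06-30",
    "benchmarks": [
        {"task": "summarisation accuracy", "dataset": "customer-provided holdout",
         "metric": "ROUGE-L", "result": 0.41, "verified_by": "certification body"},
    ],
    "bias_audit": {"methodology": "disparate impact on protected attributes",
                   "worst_case_ratio": 0.87, "report_url": "https://example.com/audit"},
    "incidents_last_12_months": [
        {"date": "2025-02-11", "type": "availability", "duration_minutes": 94},
    ],
    "monitoring": {"uptime_90d": 0.997, "drift_alerts_90d": 2},
    "tested_claims": ["80% of summaries require no human edit"],
    "untested_claims": ["reduces analyst workload by 50%"],
}
```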
An enhanced certification model could build on the approaches of ISO 42001 and other certifications. It could provide more rigour than today's checkbox exercises while remaining practical for a market with thousands of vendors. The challenge lies in transforming the work of certification bodies from document review into technical assessment, and in creating market pressure for vendors to accept this level of transparency.
Insurance as a forcing function: Perhaps the insurance industry could drive change. If AI vendors needed substantial liability coverage, and insurers required genuine technical verification before issuing policies, market forces might succeed where voluntary compliance has failed.
Open source verification tools: The technical community could develop standardised tools and methodologies for testing AI claims: automated ways to verify real performance, measure bias or safety, and assess the adequacy of human-in-the-loop mechanisms. These tools could be freely available, with not using them becoming socially unacceptable.
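Some of those checks are already simple to standardise. As an example, here is a minimal sketch of one fairness check such tooling could include: per-group selection rates and the disparate impact ratio, often compared against the informal "four-fifths" threshold. The data and group labels are illustrative.

```python
# A minimal sketch of a disparate impact check on a model's decisions.
from collections import defaultdict


def disparate_impact(decisions):
    """decisions: iterable of (group_label, selected: bool) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio


decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates, ratio = disparate_impact(decisions)
print(rates)  # {'A': 0.667, 'B': 0.333} (approximately)
print(ratio)  # 0.5 -> below the 0.8 rule of thumb, flag for review
```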
The uncomfortable reality is that we're in a transitional moment. The old vendor verification playbooks don't work for AI, but we haven't built the new ones yet. Organisations are making massive bets on AI vendors using assessment frameworks designed for simpler times.
What do you think the answer is? How do we balance the need for thorough verification with the reality of market speed and scale? How do we create systems that can distinguish genuine innovation from elaborate deception without strangling progress in red tape?
I don't have all the answers. But I'm certain of this: continuing with our current approach, where certifications substitute for verification, where policies replace proof, where the ability to hype AI matters more than the ability to deliver it, just guarantees more Builder.ai-scale failures.
The cost of getting this wrong isn't just financial. It's the erosion of trust in AI itself. Every fraudulent or reckless vendor that slips through our broken due diligence processes makes it harder for legitimate innovators to be believed. Every organisation burned by fake AI becomes more skeptical of real breakthroughs.
I think we need to ask better questions and demand better answers. Quickly.
What do you think?
[1] https://www.telegraph.co.uk/business/2025/06/01/the-1bn-british-ai-dream-that-collapsed-in-controversy/
[2] https://finance.yahoo.com/news/builder-ais-shocking-450m-fall-170009323.html
[3] https://www.silicon.co.uk/cloud/ai/builder-ai-sales-collapse-615436
[4] https://www.wsj.com/articles/ai-startup-boom-raises-questions-of-exaggerated-tech-savvy-11565775004
[5] https://www.cnbc.com/2023/05/10/microsoft-ramps-up-ai-game-with-bet-on-no-code-startup-builderai.html
[6] https://www.thesaasnews.com/news/builder-ai-receives-investment-from-microsoft
[7] https://techcrunch.com/2025/05/20/once-worth-over-1b-microsoft-backed-builder-ai-is-running-out-of-money/
[8] https://www.fastcompany.com/90846670/most-innovative-companies-artificial-intelligence-2023
[9] https://techcrunch.com/2024/06/04/ai-apocalypse-chatgpt-claude-and-perplexity-are-all-down-at-the-same-time/
[10] https://jfrog.com/blog/data-scientists-targeted-by-malicious-hugging-face-ml-models-with-silent-backdoor/
[11] https://nsfocusglobal.com/ai-supply-chain-security-hugging-face-malicious-ml-models/
[12] https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/
[13] https://www.bleepingcomputer.com/news/security/ai-platform-hugging-face-says-hackers-stole-auth-tokens-from-spaces/
[14] https://www.cshub.com/data/news/openai-confirms-chatgpt-data-breach
[15] https://otter.ai/privacy-policy
[16] https://www.politico.com/news/2022/02/16/my-journey-down-the-rabbit-hole-of-every-journalists-favorite-app-00009216
[17] https://www.polymerhq.io/blog/inside-slacks-ai-training-controversy/
[18] https://www.gov.uk/government/publications/cma-ai-strategic-update/cma-ai-strategic-update
[19] https://www.fintechfutures.com/venture-capital-funding/us-fintech-tally-to-shut-down-after-failing-to-secure-necessary-funding-to-continue-operations
[20] https://www.sunsethq.com/layoff-tracker/ghost-autonomy
[21] https://news.ycombinator.com/item?id=38869127
[22] https://edgedelta.com/company/blog/ai-startup-statistics
[23] https://law.stanford.edu/2025/03/21/navigating-ai-vendor-contracts-and-the-future-of-law-a-guide-for-legal-tech-innovators/
[24] https://www.compliancehub.wiki/top-gdpr-fines-in-december-2024-key-lessons-for-compliance/
[25] https://www.hipaajournal.com/healthcare-workers-privacy-violations-ai-tools-cloud-accounts/
[26] https://www.bytebacklaw.com/2024/08/key-considerations-in-ai-related-contracts/
[27] https://www.americanbar.org/groups/business_law/resources/business-law-today/2023-september/avoiding-ai-agreement-dystopia-managing-key-risks-in-ai-licensing-deals/
[28] https://www.morganlewis.com/blogs/sourcingatmorganlewis/2023/07/contract-corner-contracting-pointers-for-services-incorporating-the-use-of-ai
[29] https://www.fsisac.com/hubfs/Knowledge/AI/FSISAC_GenerativeAI-VendorEvaluation&QualitativeRiskAssessment.pdf
[30] https://www.businesswire.com/news/home/20240215490449/en/2024-Software-Spending-to-Increase-With-Focus-on-AI-Functionality-and-Extra-Security-Gartner-Digital-Markets-Reports
[31] https://www.a-lign.com/service/iso-42001