4/14/2026
Report ID: FC-20260414-170615
FirstCheck.App
Third Party Risk Assessment
Subject Entity
Anthropic
Industry: Technology  |  Jurisdiction: US

THIRD PARTY RISK ASSESSMENT REPORT
REQUESTER INFORMATION
FirstCheck.App, Sample Report
Purpose: New Entity Check
Report Generated: April 14, 2026
Entity Analyzed: Anthropic
Jurisdiction: US
Relationship Type: Integrated Supplier / Subcontractor
Client Industry: Technology
Areas of Special Interest: Litigation & Legal Exposure, Technology, IP & Data Risk, Defense & Export Controls, Compliance Vulnerability, Reputational & Media Coverage
FIRSTCHECK.APP AND METHODOLOGY
This report was compiled using FirstCheck.App's proprietary Third-Party Risk Assessment Methodology, leveraging structured open-source research across publicly available databases, sanctions lists, corporate registries, and authoritative media sources. All findings are governed by FirstCheck.App Integrity Standards which require source transparency, cross-reference corroboration, verified findings, analytical neutrality, and verification transparency. Full standards are available at FirstCheck.app. Risk indicators in the report use a five-level color system (Red, Orange, Yellow, Green, and Insufficient Data) defined in Appendix B of this report. The overall risk indicator reflects the highest risk identified. Conclusions drawn from this report should be validated through further investigation, direct inquiry, and professional judgment before any business or compliance decision is made.
CONFIDENTIALITY
This report is confidential and proprietary. It has been prepared exclusively for the requesting party and their authorized representatives. It may not be shared, reproduced, distributed, or disclosed to any third party without the express written authorization of the requester. Unauthorized use or disclosure may violate applicable law.

TABLE OF CONTENTS
Executive Summary
1.   Entity Information
2.   Ownership & Structure
3.   Key Personnel
4.   Sanctions & Controls Screening
5.   Regulatory & Legal
6.   Adverse Media
7.   Financial Assessment
8.   Geopolitical Risk
9.   Industry-Specific Risks
10.   Certifications & Accreditations
11.   Conflicts of Interest
12.   Related & Associated Entities
13.   Areas of Special Interest
Summary Risk Assessment
Recommended Follow-Up Questions
Sources Consulted
Limitations & Recommended Next Steps
Disclaimer
Appendix A — Sanctions & Controls Databases Screened
Appendix B — Risk Rating Methodology
Third-Party Risk Review Form

Note: This report does not include page numbers as section breaks vary by browser and device.


EXECUTIVE SUMMARY

Anthropic PBC is an American artificial intelligence company headquartered in San Francisco that develops the Claude family of large language models. The company was founded in 2021 by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei, who serve as president and CEO, respectively. As of February 2026, Anthropic has an estimated valuation of $380 billion. The overall risk indicator is Red, driven by the unprecedented designation of the company as a "supply chain risk" by the U.S. Department of Defense in March 2026; Anthropic is the first American company ever to receive this designation, which has historically been reserved for foreign adversaries. Because the contemplated relationship is that of an integrated supplier or subcontractor, regulatory and contractual limitations involving federal agencies could affect operational dependencies and flow-down compliance obligations.

Sanctions Screening: No Matches Identified

Regulatory Risk: Major DoD supply chain risk designation; ongoing litigation challenging federal restrictions

Adverse Media: Significant coverage regarding Pentagon conflicts and data security incidents

Financial Risk: Well-funded with a $380B valuation and $30B in annualized revenue, but operating at a loss

Overall Risk Indicator: Red - Based on highest section-level risk finding

Key Risk Factors:

DoD designated the company a "supply chain risk" - the first U.S. company to receive this designation

Multiple copyright infringement lawsuits totaling billions in claimed damages

Recent security incidents including source code leaks and unauthorized AI-enabled cyberattacks

Federal government contractual restrictions may impact downstream compliance obligations

Ongoing litigation challenging federal agency restrictions on technology use

1. ENTITY INFORMATION

Name: Anthropic PBC
Country: United States

Business Type: Public Benefit Corporation
Website: https://www.anthropic.com
Industry: Artificial Intelligence Safety and Research
Headquarters: San Francisco, California

The ANTHROPIC name is a registered trademark owned by Anthropic, PBC, a San Francisco, California entity. The trademark was filed on November 6, 2023 under serial number 98256909 and registered on July 1, 2025.

Anthropic PBC is registered with the Office of the Treasurer and Tax Collector of San Francisco at 731 Sansome St Fl 5, San Francisco, CA 94111, with a business start date of September 27, 2021. The entity was incorporated in Delaware on August 30, 2022.

2. OWNERSHIP & STRUCTURE

Anthropic was founded in 2021 by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei, who are president and CEO, respectively.

Anthropic operates as a public benefit corporation. Anthropic's "Long-Term Benefit Trust" is a purpose trust for "the responsible development and maintenance of advanced AI for the long-term benefit of humanity". It holds Class T shares in the PBC, which allow it to elect directors to Anthropic's board. As of October 2025, the members of the Trust are Neil Buddy Shah, Kanika Bahl, Zach Robinson, and Richard Fontaine.

Key investors include Amazon ($8B total), Google ($3B total), Microsoft (up to $5B), Nvidia (up to $10B), ICONIQ, Lightspeed, Fidelity, Spark Capital, Salesforce Ventures, Menlo Ventures, and Bessemer Venture Partners.

Amazon remains a minority investor and does not have a board seat. The company's governance structure places a cap on any single investor's voting power, ensuring that neither Amazon nor Google can exert outsized influence.

RISK INDICATOR: Green - Well-documented corporate structure with appropriate governance safeguards

3. KEY PERSONNEL

Dario Amodei (born 1983) is an American artificial intelligence (AI) researcher and entrepreneur. In 2021, he and his sister Daniela Amodei co-founded Anthropic, the company behind the Claude series of large language models.

He completed his undergraduate studies in physics at Stanford University and earned a PhD in physics from Princeton University, where he focused on the electrophysiology of neural circuits. His professional experience includes positions at Baidu and Google and service as Vice President of Research at OpenAI, where he played a crucial role in the development of GPT-2 and GPT-3.

Unlike Dario, his younger sister Daniela does not have a background in science and research; she earned a bachelor's degree in English literature, politics, and music from the University of California. Her Silicon Valley career began at Stripe, an online finance and payments startup, where she worked for five years in several positions before leaving as a Risk Manager in 2018.

Other OpenAI employees left to start Anthropic as co-founders, including Benjamin Mann (Head of Anthropic Labs), Jared Kaplan (Chief Science Officer), Jack Clark, Sam McCandlish (Chief Architect), Tom Brown, and Christopher Olah (Interpretability Research Lead).

RISK INDICATOR: Green - Well-qualified leadership team with relevant industry experience

4. SANCTIONS & CONTROLS SCREENING

This section reflects screening conducted across various sanctions, controls and watchlist databases. A complete list of these databases is provided in Appendix A. Individual databases are identified in this section only when a match or potential match is found. No listing means no matches for this entity were found.

IMPORTANT DISCLAIMER: This screening is based on open-source web research conducted at the time of report generation. FirstCheck.App does not directly query sanctions databases in real time. Sanctions listings change frequently. The requesting party must conduct independent direct screening against all applicable databases before entering into any business relationship or transaction. Reliance on this report without independent verification does not constitute a defense to sanctions violations.

SCREENING TIMESTAMP: List checks performed on April 14, 2026 at 10:01:35 PM UTC
Results: No matches were identified for Anthropic PBC or its key executives across the databases listed in Appendix A.
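
For illustration only, the following is a minimal sketch of the kind of independent direct screening the disclaimer above requires, fuzzy-matching subject names against a locally downloaded consolidated list export. The file name, column name, and match threshold are hypothetical assumptions; a licensed compliance platform should be used in practice.

# Hypothetical sketch of independent name screening against a locally
# downloaded consolidated sanctions list export. The CSV file name, the
# "listed_name" column, and the 0.85 threshold are illustrative
# assumptions, not FirstCheck.App's actual methodology.
import csv
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two names."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def screen_entity(name: str, list_path: str, threshold: float = 0.85) -> list[dict]:
    """Return potential matches, highest score first."""
    hits = []
    with open(list_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            score = similarity(name, row["listed_name"])
            if score >= threshold:
                hits.append({"listed_name": row["listed_name"], "score": round(score, 3)})
    return sorted(hits, key=lambda h: h["score"], reverse=True)

# Example: screen the subject entity and key executives named in this report.
for subject in ["Anthropic PBC", "Dario Amodei", "Daniela Amodei"]:
    matches = screen_entity(subject, "consolidated_sdn_export.csv")
    print(subject, "->", matches or "no matches")

Fuzzy matching of this kind produces potential matches only; any hit would still require manual adjudication against the official source list.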

RISK INDICATOR: Green - No adverse findings identified in regulatory databases

5. REGULATORY & LEGAL

The DoD officially designated Anthropic a supply chain risk in early March 2026, stating that the company threatened national security. Anthropic is the first American company to be given the designation, which has historically been reserved for foreign adversaries. The formal declaration would require defense vendors and contractors to certify that they do not use Anthropic's models in their work with the Pentagon.

A federal judge in California has indefinitely blocked the Pentagon's effort to "punish" Anthropic by labeling it a supply chain risk, ruling that those measures ran roughshod over its constitutional rights. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," U.S. District Judge Rita Lin wrote in a stinging 43-page ruling. The dispute arose after the DoD and Anthropic were unable to come to terms on how the company's AI technology could be used, specifically in relation to autonomous weapons and domestic surveillance. Anthropic wanted assurance that its technology would not be tapped for fully autonomous weapons or domestic mass surveillance, while the DoD wanted Anthropic to grant the agency unfettered access to Claude across all lawful purposes.

Anthropic filed suit against the Department of Defense after the agency labeled it a supply chain risk. Anthropic called the DoD's actions "unprecedented and unlawful" and accused the administration of retaliation in a complaint filed in San Francisco federal court. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the lawsuit reads.

RISK INDICATOR: Orange - Major regulatory dispute with federal agencies creating operational restrictions

6. ADVERSE MEDIA

The case, Concord Music Group v. Anthropic, was brought in October 2023 by several music publishers alleging that Anthropic unlawfully used copyrighted musical works, particularly a large corpus of song lyrics, for training its AI product Claude.

A cohort of music publishers led by Concord Music Group and Universal Music Group are suing Anthropic, saying the company illegally downloaded more than 20,000 copyrighted songs, including sheet music, song lyrics, and musical compositions. The publishers said in a statement that the damages could amount to more than $3 billion, which would make it one of the largest non-class action copyright cases filed in U.S. history. In a separate authors' copyright case, Judge William Alsup ruled that it is legal for Anthropic to train its models on copyrighted content, but that it was not legal for Anthropic to acquire that content via piracy. This ruling could be significant for the music publishers' case, as it establishes that while AI training may be fair use, obtaining copyrighted material through piracy is not protected.

China-backed threat actors leveraged Anthropic's AI systems to significantly automate breaches against roughly 30 corporate and government targets. Anthropic assessed with high confidence that the threat actor was a Chinese state-sponsored group that manipulated its Claude Code tool into attempting infiltration of roughly thirty global targets. The company said it has expanded its detection capabilities and developed better classifiers to flag malicious activity, and that it is continually working on new methods of investigating and detecting large-scale, distributed attacks.

RISK INDICATOR: Orange - Significant copyright litigation and security incidents requiring ongoing management

7. FINANCIAL ASSESSMENT

On February 12, 2026, Anthropic announced that it had closed a $30 billion Series G funding round led by GIC and Coatue, bringing its post-money valuation to $380 billion, the second-largest private tech funding round of all time. The company has secured approximately $64 billion in total funding across multiple investment rounds, attracting both financial institutions and strategic partners.

Sacra estimates that Anthropic hit $30B in annualized revenue in March 2026, up about 1,400% year-over-year and up from $9B at the end of 2025. Enterprise and startup API calls continue to drive the majority of revenue through pay-per-token pricing.

Like its competitors, Anthropic continues to operate at a significant loss as it invests heavily in research, model development, and compute infrastructure. The long path to profitability hasn't deterred investors, who view these companies as platforms that could fundamentally transform how humans interact with technology.

Anthropic has begun preparations for a potential IPO as soon as 2026, hiring Silicon Valley law firm Wilson Sonsini to advise on the process, though the company has not decided when or whether it will go public. At a December 2025 event, Chief Communications Officer Sasha de Marigny said there were "no immediate plans to go public."

RISK INDICATOR: Yellow - Strong financial backing but operating losses and high capital intensity

8. GEOPOLITICAL RISK

Anthropic PBC is an American artificial intelligence company headquartered in San Francisco. The United States maintains strong rule of law, robust regulatory frameworks, and low corruption levels. However, the company faces unique geopolitical challenges due to the strategic nature of AI technology and ongoing tensions with federal agencies. In September 2025, Anthropic announced that it would stop selling its products to groups majority-owned by Chinese, Russian, Iranian, or North Korean entities due to national security concerns.

Anthropic is the first American company to be given the supply chain risk designation; the label, leveled by the Pentagon in March 2026, had previously been used only for companies seen as connected to foreign adversaries.

Jurisdictional Environment: Strong rule of law, robust regulatory framework, low corruption in primary jurisdiction

RISK INDICATOR: Yellow - Tier 1 jurisdiction but unusual federal agency conflicts create regulatory uncertainty

9. INDUSTRY-SPECIFIC RISKS (Technology)

a) EXPORT CONTROLS

No evidence identified of BIS Entity List, Denied Persons List, or Unverified List status for Anthropic PBC. The DoD and Anthropic were unable to come to terms on how the company's AI technology could be used, specifically in relation to autonomous weapons and domestic surveillance. The company's AI models may be subject to export control regulations under EAR classifications, but no specific violations were identified in public records.

b) SANCTIONS SCREENING

No OFAC SDN List or Sectoral Sanctions matches identified. In September 2025, Anthropic announced that it would stop selling its products to groups majority-owned by Chinese, Russian, Iranian, or North Korean entities due to national security concerns. The company has implemented geographic restrictions consistent with U.S. sanctions policies.

c) DATA PRIVACY

According to Anthropic, no sensitive customer data or credentials were involved or exposed in the reported data leak; the company characterized it as a release packaging issue caused by human error, not a security breach.

"These materials were early drafts of content considered for publication and did not involve our core infrastructure, AI systems, customer data, or security architecture," an Anthropic spokesperson said.

No evidence of GDPR, CCPA, or other major data privacy enforcement actions identified.

d) CFIUS/FOREIGN INVESTMENT

No evidence of CFIUS reviews or foreign investment concerns identified in public records.

Amazon remains a minority investor and does not have a board seat. The company's governance structure places a cap on any single investor's voting power, ensuring that neither Amazon nor Google can exert outsized influence.

e) IP & TRADE SECRETS

The case, Concord Music Group v. Anthropic, was brought in October 2023 by several music publishers alleging that Anthropic unlawfully used copyrighted musical works, particularly a large corpus of song lyrics, for training its AI product Claude.

A cohort of music publishers led by Concord Music Group and Universal Music Group are suing Anthropic, saying the company illegally downloaded more than 20,000 copyrighted songs. The publishers said the damages could amount to more than $3 billion.

f) CYBERSECURITY

China-backed threat actors leveraged Anthropic's AI systems to significantly automate breaches against roughly 30 corporate and government targets; Anthropic assessed with high confidence that the threat actor was a Chinese state-sponsored group that manipulated its Claude Code tool into attempting infiltration of roughly thirty global targets. Separately, a source code leak exposed around 500,000 lines of code across roughly 1,900 files; that leak is potentially more damaging to Anthropic than the earlier accidental exposure of the company's draft blog post about its forthcoming model.

g) GOVERNMENT CONTRACTS (if applicable)

In November 2024, Anthropic partnered with Palantir and Amazon Web Services to provide the Claude model to U.S. intelligence and defense agencies. In June 2025, Anthropic announced a "Claude Gov" model.

Anthropic signed a $200 million contract with the Pentagon in July 2025, but as the company began negotiating Claude's deployment on the DoD's GenAI.mil AI platform in September 2025, talks stalled.

h) AI/EMERGING TECH (if applicable)

Anthropic focuses on AI safety. Anthropic's mission emphasizes a safety-focused approach to AI research and development, positioning its corporate identity as a public benefit corporation (PBC)—a for-profit venture aimed at making a positive public impact.

It applies this principle through a training approach known as "constitutional AI," a technique that trains AI systems to follow a structured set of ethical and safety guidelines. Claude models are known for their long-context processing, structured reasoning capabilities, and comparatively cautious responses.

RISK INDICATOR: Orange - Multiple security incidents and major copyright litigation pose ongoing risks

10. CERTIFICATIONS & ACCREDITATIONS

a) QUALITY & MANAGEMENT SYSTEMS

No evidence identified of ISO 9001, ISO 45001, ISO 14001, ISO 22301, or other quality management system certifications in publicly available sources.

b) INFORMATION SECURITY

No evidence identified of ISO 27001 or SOC 2 certifications in publicly available sources. Following the recent nation-state incident, Anthropic stated that it has expanded its detection capabilities and developed better classifiers to flag malicious activity; these are internal security measures rather than formal certifications.

c) INDUSTRY-SPECIFIC CERTIFICATIONS

In June 2025, Anthropic announced a "Claude Gov" model. Ars Technica reported that as of June 2025 it was in use at multiple U.S. government agencies, suggesting some level of government security clearance or certification.

d) PROFESSIONAL ACCREDITATIONS & MEMBERSHIPS

No evidence identified of BBB accreditation, industry body memberships, or professional organization affiliations in publicly available sources.

e) CERTIFICATION CURRENCY

Limited public information available regarding specific certifications. In June 2025, Anthropic announced a "Claude Gov" model, indicating active government certification processes, but specific certification statuses could not be confirmed through public sources.

RISK INDICATOR: Insufficient Data - Limited public certification information available for private company

11. CONFLICTS OF INTEREST

a) SELF-DEALING

No evidence identified of self-dealing by management or owners, or transactions between entity and entities controlled by management.

b) RELATED PARTY TRANSACTIONS

No evidence identified of related party transactions that appear non-arm's length or undisclosed interests in suppliers/customers. The company's governance structure places a cap on any single investor's voting power, ensuring that neither Amazon nor Google can exert outsized influence.

c) GOVERNANCE CONCERNS

Anthropic was founded in 2021 by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei, who are president and CEO, respectively. The sibling leadership structure could present governance concerns, though Anthropic's "Long-Term Benefit Trust" is a purpose trust that holds Class T shares in the PBC, which allow it to elect directors to Anthropic's board.

d) OWNERSHIP CONFLICTS

No evidence identified of ownership interests in competing businesses or undisclosed beneficial ownership relationships.

e) RELATIONSHIP-SPECIFIC CONCERNS

The DoD and Anthropic were unable to come to terms on how the company's AI technology could be used, specifically in relation to autonomous weapons and domestic surveillance. For defense or government contractors, the company's restrictions on certain AI use cases could create conflicts with client requirements.

RISK INDICATOR: Yellow - Sibling leadership and AI use restrictions may create governance concerns

12. RELATED & ASSOCIATED ENTITIES

a) PARENT COMPANY

No parent or holding company identified. Anthropic PBC operates as an independent entity.

b) SUBSIDIARIES

In January 2026, Anthropic introduced a division called "Labs", with Mike Krieger (formerly the company's Chief Product Officer) joining it. In March 2026, Anthropic launched the Anthropic Institute, a think tank led by Jack Clark studying AI.

c) AFFILIATES & JOINT VENTURES

In November 2024, Anthropic partnered with Palantir and Amazon Web Services to provide the Claude model to U.S. intelligence and defense agencies. As part of Amazon's investment in the company, Amazon Web Services became Anthropic's "primary cloud and training partner," and Anthropic has since used Amazon Web Services' Trainium and Inferentia chips to train and deploy its largest AI models.

d) SIGNIFICANT SHAREHOLDERS

Key investors include Amazon ($8B total), Google ($3B total), Microsoft (up to $5B), Nvidia (up to $10B), ICONIQ, Lightspeed, Fidelity, Spark Capital, Salesforce Ventures, Menlo Ventures, and Bessemer Venture Partners.

NOTE: This section identifies structural relationships only. All risk findings in this report pertain exclusively to Anthropic PBC and not to related entities unless explicitly noted.

RISK INDICATOR: Green - Clear corporate structure with well-documented strategic partnerships

13. AREAS OF SPECIAL INTEREST

13a) Litigation & Legal Exposure

The case, Concord Music Group v. Anthropic, was brought in October 2023 by several music publishers alleging that Anthropic unlawfully used copyrighted musical works, particularly a large corpus of song lyrics, for training its AI product Claude.

A cohort of music publishers led by Concord Music Group and Universal Music Group are suing Anthropic, saying the company illegally downloaded more than 20,000 copyrighted songs. The publishers said the damages could amount to more than $3 billion, which would make it one of the largest non-class action copyright cases filed in U.S. history. Separately, in the authors' class action over Anthropic's training data, the parties negotiated a proposed class settlement intended to resolve the pending litigation in the district court, memorializing the core terms in a binding term sheet dated August 25, 2025. Financial terms of the settlement were not disclosed.

Anthropic filed suit against the Department of Defense after the agency labeled it a supply chain risk. Anthropic called the DoD's actions "unprecedented and unlawful" and accused the administration of retaliation in a complaint filed in San Francisco federal court. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the lawsuit reads.

RISK INDICATOR: Orange - Major ongoing copyright litigation with billions in potential damages and constitutional challenge against federal agencies

13b) Technology, IP & Data Risk

The source code leak exposed around 500,000 lines of code across roughly 1,900 files. The latest data leak is potentially more damaging to Anthropic than the earlier accidental exposure of the company's draft blog post about its forthcoming model.

AI company Anthropic has inadvertently revealed details of an upcoming model release, an exclusive CEO event, and other internal data, including images and PDFs, in what appears to be a significant security lapse.

According to Anthropic, this was a release packaging issue caused by human error, not a security breach; the company said the exposed materials were early drafts of content considered for publication and did not involve its core infrastructure, AI systems, customer data, or security architecture.

China-backed threat actors leveraged Anthropic's AI systems to significantly automate breaches against roughly 30 corporate and government targets. Anthropic assessed with high confidence that the threat actor was a Chinese state-sponsored group that manipulated its Claude Code tool into attempting infiltration of roughly thirty global targets. The company said it has expanded its detection capabilities, developed better classifiers to flag malicious activity, and is continually working on new methods of investigating and detecting large-scale, distributed attacks.

RISK INDICATOR: Orange - Multiple security incidents including nation-state exploitation and proprietary code leaks

13c) Defense & Export Controls

The DoD officially designated Anthropic a supply chain risk in early March 2026, stating that the company threatened national security. Anthropic is the first American company to be given the designation, which has historically been reserved for foreign adversaries. The formal declaration would require defense vendors and contractors to certify that they do not use Anthropic's models in their work with the Pentagon.

No BIS Entity List, Denied Persons List, or Unverified List status identified. In November 2024, Anthropic partnered with Palantir and Amazon Web Services to provide the Claude model to U.S. intelligence and defense agencies. In June 2025, Anthropic announced a "Claude Gov" model. Anthropic signed a $200 million contract with the Pentagon in July 2025, but as the company began negotiating Claude's deployment on the DoD's GenAI.mil AI platform in September 2025, talks stalled. The DoD and Anthropic were unable to come to terms on how the company's AI technology could be used, specifically in relation to autonomous weapons and domestic surveillance. Anthropic wanted assurance that its technology would not be tapped for fully autonomous weapons or domestic mass surveillance, while the DoD wanted unfettered access to Claude across all lawful purposes.

RISK INDICATOR: Red - Unprecedented DoD supply chain risk designation creates major defense contracting limitations

13d) Compliance Vulnerability

One expert quoted by Fortune said the mistake looked like "human error" after someone took a shortcut that bypassed normal release safeguards; Anthropic told Fortune that normal release safeguards were not bypassed. "Usually, large companies have strict processes and multiple checks before code reaches production, like a vault requiring several keys to open," the expert told Fortune. "At Anthropic, it seems that the process wasn't in place and a single misconfiguration or misclick suddenly exposed the full source code."

The issue appears to stem from how the CMS used by Anthropic works. All assets, such as logos, graphics, or research papers, that were uploaded to the central data store were public by default unless explicitly set as private. The company appears to have forgotten to restrict access to some documents that were not supposed to be public (a deny-by-default sketch illustrating this pattern appears at the end of this subsection).

In the separate authors' copyright litigation, Judge William Alsup ruled that it is legal for Anthropic to train its models on copyrighted content, but that it was not legal for Anthropic to acquire that content via piracy. This ruling could be significant for the music publishers' case, as it establishes that while AI training may be fair use, obtaining copyrighted material through piracy is not protected.
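
As referenced above, the following is a minimal hypothetical sketch of the deny-by-default pattern that would prevent the CMS misconfiguration described in this subsection: an asset stays private until multiple explicit approvals are recorded, the "vault requiring several keys" described to Fortune. All names and structures are illustrative assumptions, not Anthropic's actual CMS.

# Hypothetical sketch: deny-by-default visibility for CMS assets.
# Asset and approver names are illustrative; Anthropic's actual CMS is unknown.
from dataclasses import dataclass, field
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"
    PUBLIC = "public"

@dataclass
class Asset:
    path: str
    # Deny by default: an asset is PRIVATE unless explicitly published.
    visibility: Visibility = Visibility.PRIVATE
    approvals: set[str] = field(default_factory=set)

REQUIRED_APPROVERS = {"release_manager", "security_review"}

def publish(asset: Asset, approver: str) -> None:
    """Record an approval; flip to PUBLIC only once all approvals exist."""
    asset.approvals.add(approver)
    if REQUIRED_APPROVERS <= asset.approvals:
        asset.visibility = Visibility.PUBLIC

# A single misclick cannot expose an asset: one approval is not enough.
doc = Asset("drafts/upcoming-model-notes.pdf")
publish(doc, "release_manager")
assert doc.visibility is Visibility.PRIVATE   # still private
publish(doc, "security_review")
assert doc.visibility is Visibility.PUBLIC    # both "keys" turned

The point of the sketch is the inversion of the reported default: under a public-by-default store, one forgotten restriction exposes a document, whereas under deny-by-default, exposure requires multiple affirmative acts.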

RISK INDICATOR: Orange - Multiple compliance process failures and judicial findings regarding content acquisition practices

13e) Reputational & Media Coverage

The source code leak has raised new concerns in Washington about national security and the role of AI in defense. Representative Josh Gottheimer, a New Jersey Democrat and leading House voice on AI and cybersecurity, wrote to Anthropic CEO Dario Amodei warning of potential risks tied to the leak. His message reflects mounting pressure from lawmakers as AI systems become increasingly embedded in U.S. defense and intelligence operations.

Dean Ball, a former Trump White House AI adviser, has referred to the designation as a "death rattle" of the American republic, arguing the government has abandoned strategic clarity and respect in favor of "thuggish" tribalism that treats domestic innovators worse than foreign adversaries. Hundreds of employees from OpenAI and Google have urged the DoD to withdraw its designation and called on Congress to push back on what could be perceived as an inappropriate use of authority against an American technology company.

Usama Fayyad, senior vice provost for AI and data strategy at Northeastern University, said the U.S. government's escalation against Anthropic has set a "bad precedent." "It is not clear if it's legal or will stand, but it will cause major economic, scientific and engineering damage as everyone freezes in fear and the U.S. loses its competitive edge in AI development."

RISK INDICATOR: Orange - Significant negative coverage from security incidents and federal conflicts generating Congressional concern

SUMMARY RISK ASSESSMENT

Key Risk Factors:

• DoD supply chain risk designation unprecedented for U.S. company

• Multiple billion-dollar copyright infringement lawsuits pending

• Recent security incidents including source code leaks and nation-state exploitation

• Federal agency restrictions may impact downstream contractor relationships

• Compliance process gaps evidenced by multiple data exposure incidents

Recommendations:

1. Obtain written confirmation regarding any DoD supply chain risk designation impact on contract performance
2. Review all AI technology use restrictions and flow-down compliance obligations
3. Assess cybersecurity posture given recent incidents and nation-state targeting
4. Monitor ongoing copyright litigation and potential settlement terms
5. Evaluate business continuity plans regarding federal agency access restrictions

Monitoring Needs:

• DoD supply chain risk litigation outcomes and regulatory changes

• Copyright infringement case resolution and financial impacts

• Additional security incidents or nation-state targeting

• Changes to federal agency AI technology policies

• Congressional or regulatory responses to Pentagon designation

Overall Risk Indicator: Red - Based on highest section-level risk finding

Recommended Action: Proceed with enhanced due diligence and legal review of federal compliance obligations
RECOMMENDED FOLLOW-UP QUESTIONS

Based on the findings in this report, the following questions should be addressed through direct inquiry with the entity or additional research:

1. What specific measures has Anthropic implemented to address the security process failures that led to the Claude Code source code leak and CMS configuration issues?

2. How does the DoD supply chain risk designation specifically impact Anthropic's ability to work with defense contractors and what flow-down restrictions apply to integrated suppliers?

3. What is the current status and expected resolution timeline for the major copyright infringement lawsuits, particularly the $3 billion music publishers case?

4. What cybersecurity enhancements have been implemented following the nation-state exploitation of Claude systems by Chinese threat actors?

5. How does Anthropic's restriction on autonomous weapons and domestic surveillance use cases affect integration with defense-related client requirements?

6. What specific government certifications or clearances does Anthropic maintain for its Claude Gov model and federal agency work?

7. How does the ongoing constitutional litigation challenging federal agency restrictions impact current and future commercial relationships?

8. What business continuity plans are in place to address potential expansion of federal agency AI technology restrictions?

9. How does Anthropic's Public Benefit Corporation structure and Long-Term Benefit Trust governance affect operational decision-making and investor control?

10. What financial impact has the DoD designation and federal agency restrictions had on current revenue and customer relationships?

SOURCES CONSULTED

Government & Regulatory Databases:

• OFAC SDN List - No matches found

• BIS Entity List - No matches found

• SEC EDGAR - No enforcement actions identified

• SAM.gov - No debarment records found

• Federal court records (PACER/CourtListener) - Multiple active cases identified

Court & Legal Records:

• Federal district court filings regarding DoD litigation

• Copyright infringement case records (Concord Music Group v. Anthropic)

• Authors' class action settlement records

News & Media:

• CNBC, Fortune, TechCrunch, Reuters, Bloomberg, Financial Times

• The Hill, Axios, CNN Business

• Industry publications (Music Business Worldwide, TechPolicy.Press)

Business Registries & Financial:

• California corporate registry records

• Delaware incorporation records

• San Francisco business registration records

• Private company valuation and funding reports

Industry-Specific Sources:

• AI security research reports (Anthropic threat intelligence)

• Cybersecurity incident analysis (Obsidian Security, Proofpoint)

• Legal analysis from law firm publications (Mayer Brown, Paul Weiss)

LIMITATIONS & RECOMMENDED NEXT STEPS

This report is based on publicly available information accessible through web search. The following limitations apply:

Information Not Accessible:

• Real-time sanctions database queries

• Detailed financial statements (private company)

• Confidential litigation settlement terms

• Internal cybersecurity incident details

• Classified government contract information

• Private investor board communications

• Employee background check results

Recommended Additional Due Diligence:

1. Conduct independent real-time sanctions and export control database screening through licensed compliance platforms
2. Obtain legal opinion on DoD supply chain risk designation impact on contract performance and flow-down obligations
3. Review cybersecurity assessment reports and penetration testing results given recent security incidents
4. Analyze copyright litigation exposure and potential financial impact through legal counsel review
5. Assess business continuity planning regarding federal agency restrictions and their expansion risk
6. Verify current government certifications and clearance status through direct agency inquiry
7. Verify sanctions status through direct OFAC/BIS database query

This report is valid as of the report date. Circumstances may change. Periodic re-screening is recommended based on risk indicator and relationship type.

DISCLAIMER

FirstCheck.App is a first-level third party risk assessment tool. It is not a substitute for formal investigation, professional review, or expert compliance determinations. Report findings should be evaluated by business managers, subject matter experts, and professionals in the context of the organization's risk tolerance, policies, directives, and approaches. FirstCheck.App reports may be retained as part of the organization's third-party risk management program, including its applicable record-keeping practices.

© 2026 FirstCheck.App. All rights reserved.


APPENDIX A — SANCTIONS & CONTROLS DATABASES SCREENED

This report reflects research conducted across the following databases. Individual databases are identified in Section 4 only when a match or potential match is found.

TIER 1 — Direct Web Research (Conducted on Every Report)

1. OFAC Specially Designated Nationals (SDN) List
2. OFAC Non-SDN Lists (SSI, FSE, NS-MBS, PLC, and related)
3. BIS Entity List
4. BIS Denied Persons List
5. BIS Unverified List
6. U.S. State Department Debarred Parties List (ITAR)
7. OIG List of Excluded Individuals/Entities (LEIE)
8. GSA SAM.gov System for Award Management Exclusions
9. DEA Controlled Substances Act Exclusions
10. CMS State Medicaid Exclusion Lists (composite)
11. FDA Debarment List
12. SEC Enforcement Actions Database
13. CFTC Enforcement Actions
14. FinCEN Enforcement Actions
15. FBI Most Wanted
16. Interpol Red Notices
17. UN Security Council Consolidated Sanctions List
18. European Union Consolidated Sanctions List
19. UK HM Treasury Sanctions List
20. World Bank Debarment List
21. Asian Development Bank Sanctions List
22. OpenSanctions Consolidated Database

TIER 2 — Web Research Based (Conducted Where Relevant)

1. FATF Grey List (Jurisdictions Under Increased Monitoring)
2. FATF Black List (High-Risk Jurisdictions — Call for Action)
3. SECO Sanctions List (Switzerland)
4. MAS Sanctions List (Singapore)
5. DFAT Sanctions List (Australia)
6. Global Affairs Canada Sanctions List
7. Japan METI/MOFA Sanctions and Export Control Lists
8. France Direction Générale du Trésor Sanctions List
9. Germany BAFA Export Control and Sanctions Lists
10. UAE Sanctions List
11. Israel Sanctions List
12. ICIJ Offshore Leaks Database (Panama Papers, Pandora Papers)
13. Transparency International Corruption Perceptions Index (CPI)
14. Basel Institute AML Index
15. ACAMS Watchlist (open-source tier)
16. South Korea MOFAT Sanctions List
17. Inter-American Development Bank Sanctions List

Tier 1 databases are researched on every report. Tier 2 databases are researched based on entity jurisdiction, industry, and risk profile. This screening is conducted through open-source web research and does not constitute direct real-time database queries. Independent verification against all applicable databases is required before entering into any business relationship or transaction.


APPENDIX B — RISK RATING METHODOLOGY

Risk ratings reflect a qualitative assessment of the severity, recency, and regulatory relevance of identified issues.

Red — Critical Risk
Confirmed regulatory enforcement, sanctions violations, or systemic control failures with material impact requiring immediate attention or enhanced approval.
Orange — Significant Concerns
Significant regulatory, legal, or reputational issues requiring enhanced due diligence, ongoing monitoring, or senior management approval before proceeding.
Yellow — Minor Issues
Historical concerns now resolved, manageable risks, or areas requiring periodic monitoring but not blocking engagement.
Green — No Adverse Findings
No material adverse findings identified in available open-source information. Standard onboarding procedures apply.
Insufficient Data
Limited publicly available information to assess risk. Additional research or direct inquiry recommended.
Overall Risk Rating
The overall risk rating reflects the highest relevant risk factor identified rather than an average score. A single critical finding elevates the overall rating regardless of positive findings in other areas.
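
As a minimal sketch, the highest-risk-prevails rule can be expressed as a maximum over an ordered rating scale rather than an average. The section names and rank values below are illustrative only; "Insufficient Data" carries no rank and is excluded from the aggregation unless no section can be rated.

# Minimal sketch of the highest-risk-prevails aggregation rule.
# Rank values are illustrative; "Insufficient Data" has no rank.
RANK = {"Green": 0, "Yellow": 1, "Orange": 2, "Red": 3}

def overall_rating(section_ratings: dict[str, str]) -> str:
    """Overall rating is the worst rated section, not an average."""
    rated = [r for r in section_ratings.values() if r in RANK]
    return max(rated, key=RANK.get) if rated else "Insufficient Data"

sections = {
    "Ownership & Structure": "Green",
    "Regulatory & Legal": "Orange",
    "Defense & Export Controls": "Red",
    "Certifications & Accreditations": "Insufficient Data",
}
print(overall_rating(sections))  # -> "Red": one critical finding prevails

This mirrors the outcome in this report: a single Red finding (Defense & Export Controls) sets the overall indicator to Red despite Green and Yellow findings elsewhere.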

REPORT METADATA
Report ID: FC-20260414-170615
Date Generated: April 14, 2026
FirstCheck Version: v2.12.7
Entity Analyzed: Anthropic
Jurisdiction: US
Relationship Type: Integrated Supplier / Subcontractor
Client Industry: Technology
Reason for Inquiry: New Entity Check

This report is valid as of the date generated. Circumstances may change. Periodic re-screening is recommended based on risk indicator and relationship type.


THIRD-PARTY RISK REVIEW FORM
Reviewer Assessment
Entity Reviewed: Anthropic    Report ID: FC-20260414-170615    Report Date: April 14, 2026
Risk Decision (Select one)
Acceptable: Okay to proceed.
Caution: Monitoring and oversight recommended.
Pending: Verify and resolve before proceeding.
Full Review: Conduct full background due diligence before proceeding.
Unacceptable: Do not proceed.
Other/Comment:
Recommended Frequency of Third Party Risk Assessment Reports
Every month
Every three months
Every six months
Annually
Other:
Additional Reviewer Comments
Reviewer Certification

The undersigned has reviewed this Third Party Risk Assessment Report and confirms that the risk decision and recommendations above are based on the information provided and professional judgment.

Signature
Date
Reviewer Name
Title

This form should be completed by the designated compliance reviewer and retained with the FirstCheck report as part of the organization's due diligence records.