Law & AI at UC Law SF: Governance, Power and the New Liability Map
Last week I had the privilege of co-organising and speaking at UC Law SF’s Law & AI Certificate program, also wearing my hat as Chief Legal Officer of Scrydon. Five days of sessions, case studies and live litigation confirmed what many of us have sensed for a while: the central questions of AI governance are no longer merely technical or policy-driven — they are legal, contractual and fiduciary. While the course is mainly directed at US legal professionals, the learnings and litigation signals travel well beyond US borders and are directly relevant to anyone navigating AI governance, compliance or deal-making in Europe and beyond.
1. From “human in the loop” to autonomous agents and liability
We are moving from systems that answer prompts to agents that understand context, call tools and take real world actions — including moving money and touching critical infrastructure. That immediately raises questions about how far concepts like agency, duty of care and product liability really stretch when the “agent” has no intent, no professional licence and operates at machine speed. The liability spectrum we discussed — from strict vicarious liability to negligence and safe harbour regimes — is ultimately a policy choice about who in the value chain (developer, deployer, customer) will end up holding the risk.
The torts and product liability materials used hypotheticals like CareBot, AutoHaul and JudgeAssist to test different analogies: AI as product, animal, child, or human-like agent. One key takeaway was that a strict respondeat superior model can force principals to internalise harms and optimise activity levels, whereas a pure negligence model risks under-incentivising investment in guardrails and insurance. Case studies from Munich Re’s aiSure, Vouch and others showed how bespoke AI coverage and risk-based pricing are likely to become part of the effective “regulatory stack” for high-risk uses.
At the same time, frontier scaling work and live incidents show what “autonomous” now means for security and compliance. The scaling laws session walked through GTG-1002, a 2025 case in which a Chinese state-sponsored actor used a Claude-based agentic stack across the full MITRE ATT&CK chain to conduct largely automated intrusions against more than 30 organisations worldwide. Roughly 80–90% of the campaign was automated, with humans involved at only a few decision points, and the framework decomposed the attack into thousands of “innocent looking” subtasks that the model executed without understanding their combined effect. From an in-house perspective, that turns “assume breach” into operational doctrine: we need to decide whether agents are treated as users for identity and logging, how we record who instructed what, and how our controls, monitoring and incident response plans assume that agents, not just humans, will probe and exploit our systems.
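One concrete consequence for in house teams is deciding how agent activity gets logged. Below is a minimal sketch, assuming a hypothetical internal audit layer (the AgentAction fields, tool names and file path are all illustrative, not drawn from the course materials), of treating an agent as a first-class identity whose every tool call records the human principal who instructed it:

```python
# Minimal sketch: treating an AI agent as a first-class identity in audit logs.
# All names here (AgentAction, log_agent_action) are illustrative, not a real API.
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    agent_id: str      # the agent's own identity, distinct from its operator
    principal: str     # the human or service account that instructed the agent
    instruction: str   # the natural-language task the agent was given
    tool: str          # the tool or API the agent actually called
    arguments: dict    # parameters passed to that tool
    action_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_agent_action(action: AgentAction, sink) -> None:
    """Append one record per tool call to an append-only sink."""
    sink.write(json.dumps(asdict(action)) + "\n")

# Example: an agent moving money on a human's instruction leaves a traceable record.
with open("agent_audit.log", "a") as sink:
    log_agent_action(AgentAction(
        agent_id="payments-agent-01",
        principal="alice@example.com",
        instruction="Pay invoice #123 to the usual supplier",
        tool="payments.create_transfer",
        arguments={"amount_eur": 1200, "beneficiary": "supplier-acct-9"},
    ), sink)
```

However simple, a record like this is what lets incident response answer the questions the session kept returning to: which agent acted, on whose instruction, and through which tool.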
The session on “Risky Agents Without Intentions” (the title borrowed from the 2024 article "The Law of AI is the Law of Risky Agents without Intentions" by Ian Ayres and Jack M. Balkin) provided a liability blueprint for exactly these kinds of systems. It suggests that in a world of agentic AI, the proxy for “intent” will be what the company knew or should have known about risks, how it documented design choices and mitigations, and how it structured human in the loop oversight, evaluation, red teaming, model cards and terms of use. Different regimes — strict liability for high risk uses, negligence, intermediary immunity, information fiduciary duties and even “law following AI” design — are ultimately choices about how the cost of risk is distributed between developers, deployers, users and insurers.
2. Governance and power: politics, regulation, boards and term sheets
On the public law side, US and EU trajectories are diverging in ways that directly affect how we structure governance at Scrydon and similar companies. In the United States, stalled federal efforts mean California and a handful of other states are setting the practical baseline on provenance and watermarking, deepfakes, automated decision making and catastrophic risk, creating a patchwork that already functions as a de facto standard for national players. In the EU, the AI Act flips the familiar GDPR logic by placing providers in the lead and pushing deployers into a follower role, with a highly granular compliance architecture that lands very differently on a startup balance sheet than on a hyperscaler’s.
The Chinese AI Governance & Export readings added a sharper geopolitical lens: they frame AI safety not as a purely technical discipline but as one of the main theatres of strategic competition, particularly between the US and China. The analysis highlights how questions about frontier risk, export controls and surveillance are being pulled into a broader contest over industrial policy and ideology, and how China’s domestic governance model — combining tight deployment controls with an assertive export posture — is shaping the global conversation on “responsible AI” as much as any multilateral forum.
The AI & Politics session made this even more concrete by mapping ideological camps onto two axes: trust vs. distrust of big tech and “AI is mostly hype” vs. “AI will be as big a deal as fire”. Capitalists/accelerationists and AI hawks cluster in the “trust + big deal” quadrant, while AI safety and “holy war” groups see AI as transformative but deeply distrust large firms; AI ethics, unions and skeptics sit closer to “mostly hype” and distrust. These maps are then overlaid onto specific fights — moratoria, “TAKE IT DOWN” coalitions, export controls, and even Anthropic’s supply chain risk designation — showing that future coalitions will cut across traditional left/right lines, especially on questions of corporate power, labour and openness.
Equally important, the real power plays are happening in private law. In the AI in boards session, secondary option markets, inter-company deals and shareholders’ agreements were described as new governance instruments: hard-wiring vetoes on certain use cases, mandating safety reviews, structuring information rights around model evolution, or linking board seats to specific risk governance commitments. Build versus buy choices — whether to hire key teams, license models or acquire companies — are no longer just technology strategy; they are decisions about where control, fiduciary responsibility and regulatory exposure sit inside the group.
3. Compliance from the inside: inventories, frameworks and friction
The “AI Compliance in Practice” in house panel drilled into what it takes to operationalise all of this. With privacy and product counsel from companies such as OpenAI, Datadog, Acxiom and Microsoft, the session focused first on scoping: agreeing on what “AI” means for the organisation, mapping stakeholders, and identifying friction points between risk, product and sales. A second layer was the framework landscape: NIST AI RMF, OECD principles, ISO/IEC 42001/42005 and 23894, board level guidance like ISO/IEC 38507, and concrete regulatory regimes such as the EU AI Act, the US Executive Order and state laws in Colorado, Illinois, California and Utah. Operationally, the emphasis was on inventories, vendor and third party management, and embedding reviews into the product lifecycle. Concrete examples included notetaker tools (security controls, consent, retention and filtering), marketing uses (clean inputs, preferred vendors, approvals and IP hygiene) and legal department tools (productivity gains balanced against confidentiality and privilege). A recurring theme was the need to build frameworks robust enough to survive rapid regulatory change without paralysing product teams — a challenge very familiar to any in house function watching AI law evolve month by month.

A complementary “AI In House” panel brought together CLOs and senior counsel from AI-heavy organisations, with experience spanning Instagram, Neuralink, Meta, Anthropic and Liquid AI. Their discussions underscored how copyright, open source licensing, privacy and safety now intersect in everyday product counselling, and how in-house teams are being forced to become AI literate very quickly to maintain credibility with both engineers and regulators.

The “Lawyers as Context Masters” session reframed what all of this means for individual lawyers. As the cost of producing lawyer-like text collapses but the cost of evaluating it does not, the value of the lawyer shifts to “context engineering”: drawing corpus boundaries, structuring prompts, defining retrieval logic and reconstructing legal hierarchies that models flatten (binding vs. persuasive, primary vs. secondary, jurisdictionally relevant vs. noise). The growing database of AI-related sanctions is less about “bad tech” than about poor epistemic hygiene — lawyers not understanding that hallucinations are often exactly what you get when input design and hierarchy are wrong.
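To make “context engineering” concrete, here is a minimal sketch of the kind of retrieval re-ranking the session gestured at. The Source fields, the scoring weights and the pipeline as a whole are illustrative assumptions, not a description of any real legal AI product:

```python
# Minimal sketch of "context engineering" for legal retrieval: re-ranking sources
# so the model sees binding, primary, in-jurisdiction authority first.
# Fields and weights are illustrative assumptions, not a real tool's API.
from dataclasses import dataclass

@dataclass
class Source:
    citation: str
    binding: bool        # binding vs. merely persuasive authority
    primary: bool        # primary law vs. secondary commentary
    jurisdiction: str    # e.g. "US-CA", "EU", "DE"
    similarity: float    # raw retrieval score from the vector store

def authority_rank(source: Source, target_jurisdiction: str) -> float:
    """Combine retrieval similarity with the legal hierarchy that models tend to flatten."""
    score = source.similarity
    score += 2.0 if source.jurisdiction == target_jurisdiction else 0.0
    score += 1.5 if source.binding else 0.0
    score += 0.5 if source.primary else 0.0
    return score

def build_context(candidates: list[Source], target_jurisdiction: str, top_k: int = 5) -> list[Source]:
    # Only the top-ranked sources make it into the prompt's context window.
    return sorted(candidates, key=lambda s: authority_rank(s, target_jurisdiction), reverse=True)[:top_k]
```

The point is not the particular weights but the design decision they encode: the lawyer, not the model, decides what counts as authority and what counts as noise.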
4. The web, crawlers and content licensing
Several sessions looked at how AI is transforming the web’s infrastructure and economics. A substantial share of web traffic already comes from bots, a proportion expected to grow as AI crawlers and agents proliferate, making Retrieval Augmented Generation (RAG) central to system design and pushing content delivery networks (CDNs) into the role of critical control points. Open platforms such as Stack Overflow and Wikipedia are caught between their original human-to-human model and heavy AI crawler traffic that strains infrastructure while training systems that can substitute for their communities.
The scraping and crawlers tutorial described how AI crawlers have accelerated the “toxic web” trend: human traffic falls while AI traffic can spike 10–20x, with bots ignoring robots.txt, masquerading as users and effectively turning training runs into distributed denial of service events that the “digital commons” subsidises. In response, CDNs such as Cloudflare are emerging as de facto gatekeepers, experimenting with pay-per-crawl models and standards like x402 or “Really Simple Licensing”, while agent-aware protocols like MCP and tools such as NLWeb offer a more auditable, API-based path for AI access to site content.
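For anyone auditing their own exposure, the baseline signal is still robots.txt, whatever commercial layers CDNs add on top. The sketch below uses Python’s standard urllib.robotparser to check what a site’s robots.txt actually permits; the crawler user-agent strings are examples only, not an authoritative or current list:

```python
# Minimal sketch: checking what a site's robots.txt allows for named AI crawlers.
# The user-agent strings below are examples; real crawler names vary and change.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot"]   # illustrative, not exhaustive

def crawl_permissions(robots_url: str, page_url: str) -> dict[str, bool]:
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses the live robots.txt
    return {ua: parser.can_fetch(ua, page_url) for ua in AI_CRAWLERS}

if __name__ == "__main__":
    # Example against a hypothetical site: which declared AI crawlers may fetch /articles/?
    print(crawl_permissions("https://example.com/robots.txt", "https://example.com/articles/"))
```

Of course, a central point of the session was that many crawlers ignore these declarations — which is exactly why enforcement is migrating to the CDN layer.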
The “Content, Access & Control” panel extended this into a full lifecycle view of licensing. Upstream, it distinguished the right to use from the right to exclude, and walked through licensing of training data — from public web material and structured datasets to user content and synthetic data — against the backdrop of EU and US research rights, DSA VLOP obligations, robots.txt and emerging AI specific opt outs. Midstream, it looked at post training issues around model weights, inference, RAG and agentic access, including questions about removal, machine unlearning and the feasibility of post hoc deletions for open models. Downstream, it asked who (if anyone) can claim rights in human, AI and co generated outputs, and whether existing IP frameworks are the right tool for style replication, persona based constraints and cross border licensing.
5. Copyright, markets and the cost of training
The current U.S. position is that purely AI‑generated material, or material with "insufficient human control" over expressive elements, is not protected, and that registrations must disclaim AI‑generated components. Three cases trace that principle: the Copyright Office's partial registration of Kashtanova's Zarya of the Dawn (protecting only the human‑authored text and arrangement), the Review Board's refusal to register Jason Allen's Théâtre D'Opéra Spatial, and — now settled at the highest level — the Supreme Court's March 2026 refusal to hear Thaler's appeal in A Recent Entrance to Paradise, confirming that the human‑authorship requirement is firmly established US law.
On training, more than 80 US cases (and over 100 globally) are already active, largely framed through the four factor fair use test. Bartz v. Anthropic and Kadrey v. Meta, both book class actions, show the two main lines of argument: whether downloading and training are one integrated transformative use or two distinct uses, and how to treat “market dilution” — the idea that even non identical outputs can harm the market for works similar to those in the training set.
The US Copyright Office’s 2025 AI report squarely endorses market dilution concerns, and Bartz’s treatment of a pirate “central library” for training data illustrates how statutory damages (750–150,000 USD per work) can create existential exposure when hundreds of thousands of works are involved. Output side infringement raises a different set of questions: when does style mimicry or “regurgitation” of training data cross into infringement, and who is the direct infringer — the end user, the platform, or the model provider? The session noted explicit regurgitation allegations in some music and lyric cases (including the Anthropic music publisher litigation), and separated pure copyright analysis from scraping and lawful access issues — such as anti-circumvention and contractual limits — that have surfaced in disputes involving Reddit and others. Internationally, the materials contrast the EU’s TDM exceptions and AI Act driven transparency and copyright compliance duties with more permissive training exceptions in Japan and Singapore, and with China’s focus on deployment, voice rights and platform responsibility, underscoring that copyright is currently the dominant legal lens through which AI is being contested.
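The statutory damages arithmetic is worth spelling out, because it is what turns book class actions like Bartz into balance-sheet events. The work count below is a purely hypothetical round number, not a figure from any of the cases:

```python
# Back-of-envelope sketch of US statutory damages exposure (17 U.S.C. § 504(c)).
# The number of works is a hypothetical round figure, not taken from any case.
PER_WORK_MIN = 750        # statutory minimum per infringed work, USD
PER_WORK_MAX = 150_000    # statutory maximum for wilful infringement, USD

works_in_class = 500_000  # hypothetical: hundreds of thousands of books in a class

low = works_in_class * PER_WORK_MIN
high = works_in_class * PER_WORK_MAX
print(f"Exposure range: ${low:,} to ${high:,}")
# -> Exposure range: $375,000,000 to $75,000,000,000
```

Even at the statutory minimum, the floor runs into the hundreds of millions — which is what the session meant by existential exposure.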
6. Courts, evidence, privilege and ethics
On the litigation side, the session “Artificial Intelligence in the Courts” showed how quickly evidence and privilege doctrine are having to adjust. A proposed Federal Rule of Evidence 707 would import Rule 702’s reliability standards into machine-generated evidence offered without an expert, but its permissive drafting has raised concerns that it could make it too easy to admit opaque “machine opinions”. Deepfake-driven authentication problems, privilege rulings such as United States v. Heppner (no attorney-client privilege for chats with an AI) and Warner v. Gilbarco (work product protection for a pro se litigant’s AI use), and emerging prompt preservation duties in standing orders all point in the same direction: courts will treat AI outputs as evidence and ESI, but they will demand reliability, auditability and clear human accountability.
The legal ethics session was a useful reminder that, for lawyers, none of this happens in a vacuum. The “You’re the lawyer, not the AI” framing walked through how existing rules of professional conduct apply directly to AI use: competence (1.1) now includes understanding when and how to safely use AI tools; diligence (1.3) and candour to the tribunal (3.3) are squarely implicated by hallucinated cases; and confidentiality (1.6) requires careful decisions about which client data, if any, is sent to external services. Supervisory rules (5.1, 5.3) and independence constraints (5.4, 5.5) limit how far AI can be embedded in pricing and service models, and real world examples of sanctions for fabricated citations, privilege issues for consumer AI use, and internal drama inside AI companies themselves underline that bar regulators will reach for existing tools before waiting for bespoke “AI ethics” rules.
7. Why this matters for in house teams
Across all of these discussions, I was struck by how quickly the centre of gravity is moving:
- from abstract “AI ethics” to very concrete allocations of liability in tort, contract, insurance and professional licensing regimes;
- from public law debates to private law instruments — shareholders’ agreements, licensing terms, employment and governance structures — that quietly decide who really controls models, data and deployment;
- from speculative risk scenarios to incidents and lawsuits that will set defaults for how AI can be built, deployed and contested.
For legal departments supporting agentic AI, the bar is moving faster than most internal frameworks can track. A policy document is a starting point, not a governance strategy; a compliance checklist is the beginning of risk management, not the end of it. The liability frameworks are being shaped in real litigation right now, and the governance expectations are increasingly being set in board resolutions, vendor contracts and licensing terms rather than in regulation. Technology evolves fast — but the customers who start asking the right questions about how AI products are architected, what controls are built in at the model and agent level, and where accountability actually sits in the stack will be far better placed than those who bolt governance on afterwards.
If you are based in Europe and wondering whether a US-focused programme is worth the trip — in my experience, yes. The Law & AI Certificate at UC Law SF offers a ground-level view of how AI law is being made in real time, and that perspective travels well. Feel free to reach out if you want to know more.