I am an enterprise technology attorney with 15+ years of experience negotiating complex commercial deals, building privacy programs, and governing AI risk, in-house and at scale, alongside the sales and product teams responsible for the outcomes. This site is designed to give you a more complete picture of that experience than a resume can. Browse my domain expertise, use the AI-powered chat to ask me anything about my background, or interact with a governance workflow I built as a live demonstration of how I use frontier AI models in legal practice.
Click on the cards below to learn more about my experience and expertise in a variety of legal domains.
Using AI tools to manage legal work is no longer a "neat trick" but an essential skill. Below is a live demonstration of a five-skill product counsel governance system I built using Claude: a master router, a pre-ship AI governance review, a DPIA assessment, post-ship monitoring, and report assembly. The goal was a screening tool an in-house product counsel can use to flag possible compliance risks in new product features, data uses, or AI model feedback loops (monitoring).
Annex III, Point 4(b): AI systems used to assist in decisions on promotion, compensation, task allocation, and monitoring of performance and behavior. Employment-related AI systems that influence compensation and promotion fall squarely within Annex III. The use of behavioral metadata as proxy performance indicators reinforces this classification.
GOVERN: No documented ownership or accountability structure is described. It is unclear who is responsible for the system's outputs, who has authority to override a score, and what escalation path exists when a score is disputed. This is a blocking gap.
MAP: Foreseeable risks include disparate impact on protected classes; proxy discrimination via metadata; opacity to employees; vendor dependency on a third-party LLM; and scope creep risk.
MEASURE: No testing, bias detection, or benchmark methodology is described. For a High-Risk system under the EU AI Act, this is a blocking gap.
MANAGE: No mitigation or incident response procedure is described. Required before ship: a human oversight protocol, a dispute/correction mechanism for employees, and a vendor incident notification clause.
Article 22 (Automated Decision-Making): If the system produces scores that managers use without meaningful independent review, this may constitute automated decision-making with legal or similarly significant effects, triggering Article 22 rights. This is the highest-priority legal question for EU/UK deployment.
Data Minimization: Email metadata and Slack message frequency are behavioral proxies. The proportionality argument for using communication volume as a performance indicator is weak without validation evidence.
Purpose Limitation: Email and Slack data were almost certainly collected for communication purposes, not performance evaluation. Repurposing for scoring requires either a compatibility assessment or fresh consent basis in the EU/UK.
A DPIA is required when two or more of the nine WP248 high-risk criteria are present; here, five are satisfied.
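The two-criteria trigger reduces to a simple threshold check. A minimal sketch, with the WP248 criterion names paraphrased for illustration:

```python
# The nine WP248 high-risk criteria, paraphrased for illustration.
WP248_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_legal_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology",
    "prevents_exercising_rights",
}

def dpia_required(satisfied: set[str]) -> bool:
    """Per WP248 guidance, a DPIA is generally required when two or
    more high-risk criteria are met."""
    return len(satisfied & WP248_CRITERIA) >= 2
```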
| Risk | Inherent Risk | Residual Risk |
|---|---|---|
| Disparate impact via proxy metrics | Critical | Medium |
| Article 22 violation | Critical | Medium |
| Germany §87 BetrVG non-compliance | Critical | Low (if works council engaged) |
| Employee opacity/contestation failure | High | Medium |
| LLM vendor DPA missing | High | Low |
| Article 9 latent exposure | High | Medium |
If residual risk remains HIGH or CRITICAL after mitigation, prior consultation with the relevant supervisory authority (the ICO in the UK; the competent state data protection authority, or Landesdatenschutzbehörde, in Germany) is required under GDPR Article 36 before processing begins. On current facts, consultation may be required even with mitigations in place.
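That escalation rule is a simple check over the residual column of the risk matrix. A sketch, with the rating labels assumed rather than taken from any real tooling:

```python
# Sketch: GDPR Article 36 prior consultation is triggered when any
# risk remains HIGH or CRITICAL after mitigation. Labels illustrative.
def prior_consultation_required(residual_ratings: list[str]) -> bool:
    return any(r.upper() in {"HIGH", "CRITICAL"} for r in residual_ratings)
```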
This system is a High-Risk AI system under the EU AI Act and triggers mandatory DPIA obligations under GDPR Article 35. Three CRITICAL-level risks are present: structural proxy discrimination through behavioral metadata, potential Article 22 automated decision-making violations, and a hard legal block on German deployment absent works council consent under §87 BetrVG.
The system cannot legally launch in Germany without works council approval, which must be obtained before deployment, not after. In the EU and UK, the Article 22 compliance posture (specifically, whether manager review of AI-generated scores constitutes meaningful human oversight) is unresolved and must be designed into the product before launch.
Pre-deployment bias auditing is required both as a matter of EU AI Act compliance and as a practical defense against disparate impact claims. Four legal research questions are flagged as requiring external verification before this review can be finalized.
Select a pre-loaded scenario or describe your own. The router will determine which workflow applies and run a condensed analysis.
This demo runs a condensed version of the workflow. Full stack output includes detailed risk matrices, consolidated action items, and cross-workflow research flags.
Have a question about my experience, skills, or fit for a specific role? Ask below. This is powered by AI and grounded in my actual background. Try it the way a recruiter or hiring manager would.
If you are looking for counsel with deep legal expertise and the operational fluency of someone who has managed hundreds of deals and can engage as a genuine business partner, I would welcome the conversation.