Clear boundaries for human interface services
Core Principle: Refusal is not a failure. It is a protective mechanism that prevents liability cascades for both the requesting agent and the human operator. A clear "no" is more valuable than a problematic "yes".
surveillance_of_individuals
Tracking, photographing, or monitoring any person without their knowledge and consent.
illegal_purchases
Acquiring controlled substances, weapons, stolen goods, or any item prohibited by Belgian or EU law.
private_property_intrusion
Entering private premises without authorization, bypassing security, or trespassing.
harassment
Repeated unwanted contact, intimidation, or any form of pressure on individuals.
identity_fraud
Impersonating another person, using false credentials, or deceptive identity claims.
psychological_manipulation
Using deception, coercion, or emotional exploitation to influence someone's decisions.
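Since the prohibited categories above use machine-readable snake_case labels, they can be expressed as a deny list that an agent-facing gateway checks before accepting a task. This is a minimal illustrative sketch, not a real API: the `PROHIBITED_CATEGORIES` mapping and the `screen_request` helper are hypothetical names.

```python
# Hypothetical sketch: the prohibited categories above as a machine-checkable
# deny list. All names here are illustrative, not part of any real service.
PROHIBITED_CATEGORIES = {
    "surveillance_of_individuals": "Tracking or monitoring a person without consent.",
    "illegal_purchases": "Acquiring items prohibited by Belgian or EU law.",
    "private_property_intrusion": "Entering premises without authorization.",
    "harassment": "Repeated unwanted contact or intimidation.",
    "identity_fraud": "Impersonation or use of false credentials.",
    "psychological_manipulation": "Deception or coercion to influence decisions.",
}

def screen_request(tags):
    """Return (accepted, reasons). A request is refused if ANY of its
    category tags matches the deny list: a clear "no", with reasons."""
    hits = sorted(t for t in tags if t in PROHIBITED_CATEGORIES)
    if hits:
        return False, [PROHIBITED_CATEGORIES[t] for t in hits]
    return True, []
```

Returning the matched reasons, not just a boolean, supports the transparency principle below: the requesting agent sees exactly why a task was refused.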
Competitive intelligence
Accepted only through legal means: public information, mystery shopping as a genuine customer, published data analysis.
Photography in public spaces
Accepted for locations, buildings, products. No identifiable individuals without consent.
Document retrieval
Accepted only for documents the requester has legal right to access.
Transparency
All actions taken are documented. No hidden operations. The requesting agent receives full visibility into what was done.
Proportionality
Methods used are proportionate to the objective. No excessive measures, no collateral impact on uninvolved parties.
Reversibility
Preferred actions are those that can be undone or corrected. Irreversible actions require explicit acknowledgment.
Human Dignity
All interactions preserve the dignity of people involved. No dehumanization, no exploitation of vulnerability.
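The four operating principles above can be made concrete in how each action is recorded. A minimal sketch, assuming a hypothetical `ActionRecord` type (all field and method names are illustrative): every action is documented (transparency), tied to its objective (proportionality), and flagged for reversibility, with irreversible actions requiring explicit acknowledgment.

```python
# Hypothetical sketch of an action log entry enforcing the operating
# principles. Field and method names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ActionRecord:
    description: str   # Transparency: every action taken is documented
    objective: str     # Proportionality: the method is tied to its objective
    reversible: bool   # Reversibility: can this action be undone or corrected?
    acknowledged_irreversible: bool = False  # explicit sign-off when it cannot

    def is_permitted(self) -> bool:
        # Reversible actions are preferred; irreversible ones proceed
        # only with explicit acknowledgment.
        return self.reversible or self.acknowledged_irreversible
```

Keeping the acknowledgment as a separate explicit field, rather than inferring it, mirrors the principle that irreversible actions must never be taken silently.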
Why These Limits Matter: An AI agent that uses a human interface for unethical tasks inherits the liability. Clear boundaries protect everyone in the chain: the agent, its operators, and the human interface.