## Examples
### Batch Processing

```python
texts = ["text1", "text2", "text3"]
results = [guardian.mask_text(t) for t in texts]
```
### Usage Tracking

```python
result = guardian.mask_text(text)
print(f"Tokens used: {result['usage']['total_tokens']}")
```
### Chat with History

```python
response = guardian.safe_chat(
    prompt="What did I tell you about my contact info?",
    model_id="openai:gpt-4o",
    chat_history=[
        {"role": "user", "content": "My email is john@example.com"},
        {"role": "assistant", "content": "I understand. How can I help?"},
    ],
)
```
## Best Practices

- **Always handle errors** - Wrap API calls in try-except blocks.
- **Adjust confidence thresholds** - Lower values (e.g. 0.3) are more sensitive and flag more text; higher values (e.g. 0.8) are stricter and flag less.
- **Monitor token usage** - Track `usage.total_tokens` for cost management.
- **Use local fallback** - Enable `local_fallback=True` for reliability.
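The error-handling practice above can be sketched as a small wrapper that keeps one failed call from aborting a whole batch. This is a hedged sketch: `GuardianError` and `_StubGuardian` are hypothetical stand-ins (the original does not name the library's exception type or client class), so substitute whatever your client actually raises.

```python
class GuardianError(Exception):
    """Hypothetical exception type; use the library's real error class."""


class _StubGuardian:
    """Hypothetical stand-in for a real, initialized guardian client."""

    def mask_text(self, text):
        if not text:
            raise GuardianError("empty input")
        return {"masked_text": text, "usage": {"total_tokens": 1}}


guardian = _StubGuardian()


def mask_safely(text, fallback="[MASKING FAILED]"):
    """Wrap the API call in try-except so a single failure degrades gracefully."""
    try:
        return guardian.mask_text(text)["masked_text"]
    except GuardianError as exc:
        # Log and fall back instead of propagating the error.
        print(f"masking failed: {exc}")
        return fallback


for t in ["My email is john@example.com", ""]:
    print(mask_safely(t))
```

The fallback string here is a placeholder; with `local_fallback=True` the library itself may retry locally instead, so the except branch would only see errors that survive that fallback.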