Examples

Batch Processing

texts = ["text1", "text2", "text3"]
results = [guardian.mask_text(t) for t in texts]
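The list comprehension above aborts the whole batch on the first failure. One common pattern is to wrap each call so errors are collected per item. A minimal sketch; the helper name `mask_batch` and the error-record shape are illustrative, not part of the guardian API:

```python
def mask_batch(texts, mask_fn):
    """Mask each text, collecting per-item errors instead of raising.

    mask_fn is any callable like guardian.mask_text. Items that raise
    are recorded as {"error": ..., "input": ...} so the rest of the
    batch still completes.
    """
    results = []
    for text in texts:
        try:
            results.append(mask_fn(text))
        except Exception as exc:  # the SDK's exception types are not documented here
            results.append({"error": str(exc), "input": text})
    return results
```

Usage: `results = mask_batch(texts, guardian.mask_text)`.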

Usage Tracking

result = guardian.mask_text(text)
print(f"Tokens used: {result['usage']['total_tokens']}")
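To turn token counts into an approximate cost, `total_tokens` can be accumulated across calls. A sketch assuming the `usage` dict shape shown above; the per-1K-token rate is a placeholder, since real pricing varies by model and provider:

```python
def estimate_cost(usages, price_per_1k_tokens):
    """Sum total_tokens across usage dicts and apply a per-1K-token rate.

    usages: iterable of dicts shaped like result['usage'] above.
    price_per_1k_tokens: example rate only; check your provider's pricing.
    Returns (total_tokens, estimated_cost).
    """
    total = sum(u["total_tokens"] for u in usages)
    return total, total / 1000 * price_per_1k_tokens
```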

Chat with History

response = guardian.safe_chat(
    prompt="What did I tell you about my contact info?",
    model_id="openai:gpt-4o",
    chat_history=[
        {"role": "user", "content": "My email is john@example.com"},
        {"role": "assistant", "content": "I understand. How can I help?"}
    ]
)
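Across multiple turns, the `chat_history` list has to be extended with each exchange. One way to keep that bookkeeping in one place is a small helper; the function name and the assumption that the send callable returns the assistant's reply as a string are illustrative, not part of the documented API:

```python
def chat_turn(send_fn, history, prompt):
    """Send one turn and record both sides in the history list.

    send_fn: callable like
        lambda p, h: guardian.safe_chat(prompt=p, model_id=..., chat_history=h)
    assumed (for this sketch) to return the assistant's reply as a string.
    history: list of {"role": ..., "content": ...} dicts, mutated in place.
    """
    reply = send_fn(prompt, history)
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": reply})
    return reply
```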

Best Practices

  1. Always handle errors - Wrap API calls in try-except blocks so a single failure doesn't abort the whole run
  2. Tune confidence thresholds - Lower values (e.g. 0.3) catch more entities at the cost of false positives; higher values (e.g. 0.8) flag only high-confidence matches
  3. Monitor token usage - Track result['usage']['total_tokens'] for cost management
  4. Use local fallback - Enable local_fallback=True for reliability
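Practices 1 and 4 can be combined in a single retry wrapper. A sketch under stated assumptions: the retry count is arbitrary, and the keyword arguments (e.g. `local_fallback=True`) simply mirror the best-practices list above and may differ in the real SDK:

```python
def mask_with_retry(mask_fn, text, retries=2, **kwargs):
    """Call mask_fn up to retries + 1 times, re-raising the last error.

    kwargs (e.g. local_fallback=True) are passed through unchanged;
    the names follow the best-practices list and are not verified
    against the actual guardian signature.
    """
    last_exc = None
    for _ in range(retries + 1):
        try:
            return mask_fn(text, **kwargs)
        except Exception as exc:  # real code would catch the SDK's specific errors
            last_exc = exc
    raise last_exc
```

Usage: `result = mask_with_retry(guardian.mask_text, text, local_fallback=True)`.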