Examples

Batch Processing

const texts = ["text1", "text2", "text3"];
const results = await client.maskBatch(texts);

Usage Tracking

const result = await client.mask(text);
console.log(`Tokens used: ${result.usage.total_tokens}`);
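When making many calls, the per-call usage.total_tokens values can be summed into a running total for cost tracking. A minimal sketch; the sumTokens helper below is our own illustration, not part of the SDK:

```typescript
// Illustrative helper, not part of @anotiai/pii-masker: sums the
// total_tokens fields across a series of SDK responses.
interface UsageLike {
  usage: { total_tokens: number };
}

function sumTokens(results: UsageLike[]): number {
  return results.reduce((total, r) => total + r.usage.total_tokens, 0);
}

// Stubbed responses shaped like the SDK's result objects:
const calls: UsageLike[] = [
  { usage: { total_tokens: 120 } },
  { usage: { total_tokens: 85 } },
];
console.log(`Total tokens: ${sumTokens(calls)}`); // Total tokens: 205
```

In practice you would push each real result from client.mask() into the array (or accumulate as you go) instead of using stubs.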

Chat with History

const response = await client.safeChat(
  "What did I tell you about my contact info?",
  "openai:gpt-4o",
  {
    masking: true,
    chat_history: [
      { role: "user", content: "My email is john@example.com" },
      { role: "assistant", content: "I understand. How can I help?" },
    ],
  },
);

Best Practices

  1. Always handle errors - Wrap SDK calls in try-catch blocks
  2. Tune confidence thresholds - Lower values (e.g. 0.3) flag more potential PII but may produce false positives; higher values (e.g. 0.8) keep only high-confidence matches
  3. Batch related requests - Use maskBatch() for multiple texts instead of calling mask() in a loop
  4. Monitor token usage - Track usage.total_tokens for cost management
  5. Leverage TypeScript - Use the bundled type definitions for better DX
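The error-handling advice above can be sketched as a small wrapper. safeMaskOrNull below is our own illustrative helper, not part of the SDK; it assumes only that mask() returns a promise that may reject:

```typescript
// Illustrative wrapper, not part of @anotiai/pii-masker: resolves to
// null instead of throwing when a masking call fails.
interface MaskClient {
  mask(text: string): Promise<unknown>;
}

async function safeMaskOrNull(
  client: MaskClient,
  text: string,
): Promise<unknown | null> {
  try {
    return await client.mask(text);
  } catch (err) {
    console.error("Masking failed:", err);
    return null;
  }
}

// Demo with a stub client that always fails:
(async () => {
  const failing: MaskClient = {
    mask: async () => {
      throw new Error("network error");
    },
  };
  const result = await safeMaskOrNull(failing, "hello");
  console.log(result); // null
})();
```

Returning null keeps the calling code simple; depending on your needs you might instead rethrow after logging, or add retries.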

TypeScript Support

Full TypeScript definitions are included:

import type {
MaskResponse,
MaskPreviewResponse,
UnmaskResponse,
ChatCompletionResponse,
PiiEntity,
PiiMapItem,
} from "@anotiai/pii-masker";
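These types let your own helpers stay fully typed. A sketch using a structural stand-in; the masked_text field name here is an assumption for illustration, so check the shipped definitions for the actual shape of MaskResponse:

```typescript
// Structural stand-in mirroring what a MaskResponse might look like;
// the field names are illustrative assumptions, not the SDK contract.
type MaskResponseLike = {
  masked_text: string;
  usage: { total_tokens: number };
};

// With a typed parameter, typos in field access are caught at compile time.
function maskedText(res: MaskResponseLike): string {
  return res.masked_text;
}

const res: MaskResponseLike = {
  masked_text: "My email is [EMAIL_1]",
  usage: { total_tokens: 42 },
};
console.log(maskedText(res)); // My email is [EMAIL_1]
```

In real code you would import MaskResponse from "@anotiai/pii-masker" and use it directly instead of a local stand-in.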