# API Reference

## Constructor

```typescript
new AnotiaiPIIMasker(config: ClientConfig | string)
```
```typescript
interface ClientConfig {
  apiKey?: string;        // AnotiAI API key
  baseUrl?: string;       // API base URL (default: "https://anoti-backend-three.onrender.com")
  timeout?: number;       // Request timeout in ms (default: 60000)
  retryAttempts?: number; // Retry attempts (default: 3)
  retryDelay?: number;    // Initial retry delay in ms (default: 1000)
  maxRetryDelay?: number; // Max retry delay in ms (default: 10000)
}
```
**Example:**

```typescript
// Simple - just pass the API key
const client = new AnotiaiPIIMasker("your_api_key");
```

```typescript
// With a config object
const client = new AnotiaiPIIMasker({
  apiKey: "your_api_key",
  timeout: 120000,
});
```
## Methods

### detect(text: string, confidenceThreshold?: number): Promise<MaskPreviewResponse>

Detect PII without masking.

**Parameters:**

- `text` (string): Text to analyze
- `confidenceThreshold` (number, optional): 0.0-1.0, higher = more strict (default: 0.5)
**Returns:**

```typescript
{
  entities_found: number;
  pii_results: Array<{
    value: string;
    type: string;
    start: number;
    end: number;
    confidence: number;
  }>;
  classification: string;
  confidence: number;
  usage: {
    input_tokens: number;
    output_tokens: number;
    total_tokens: number;
  };
}
```
**Example:**

```typescript
const result = await client.detect("Contact me at john@example.com");

result.pii_results.forEach((entity) => {
  console.log(
    `${entity.type}: ${entity.value} (confidence: ${entity.confidence})`,
  );
});
```
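Because each entity carries `start`/`end` offsets, detection results can also drive custom redaction on the client side. The sketch below assumes `start` is a zero-based character index and `end` is exclusive (a common convention; verify against your actual API responses). The helper itself is not part of the SDK.

```typescript
// Hypothetical local redaction built on detect() offsets.
// Assumes zero-based `start` and exclusive `end`.
interface PiiEntity {
  value: string;
  type: string;
  start: number;
  end: number;
  confidence: number;
}

function redactByOffsets(text: string, entities: PiiEntity[]): string {
  // Replace right-to-left so earlier offsets remain valid after each splice.
  const sorted = [...entities].sort((a, b) => b.start - a.start);
  let out = text;
  for (const e of sorted) {
    out = out.slice(0, e.start) + `[${e.type}]` + out.slice(e.end);
  }
  return out;
}
```

This lets you choose your own placeholder style (here `[TYPE]`) instead of the server's `[REDACTED_*]` placeholders.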
### mask(text: string, confidenceThreshold?: number): Promise<MaskResponse>

Mask PII in text.
**Returns:**

```typescript
{
  masked_text: string;
  pii_map: Record<
    string,
    {
      value: string;
      label: string;
      confidence: number;
      placeholder: string;
    }
  >;
  entities_found: number;
  confidence_threshold: number;
  usage: {
    input_tokens: number;
    output_tokens: number;
    total_tokens: number;
  };
}
```
**Example:**

```typescript
const result = await client.mask("Hi, I'm John Doe and my email is john@example.com");
console.log(result.masked_text);
// "Hi, I'm [REDACTED_NAME_1] and my email is [REDACTED_EMAIL_1]"
```
### maskBatch(texts: string[], confidenceThreshold?: number): Promise<MaskResponse[]>

Mask PII in multiple texts (batch operation).
**Example:**

```typescript
const texts = [
  "Contact me at john@example.com",
  "My phone is 555-1234",
  "SSN: 123-45-6789",
];

const results = await client.maskBatch(texts, 0.7);

results.forEach((result, i) => {
  console.log(`Text ${i + 1}: ${result.masked_text}`);
});
```
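Since each element of the returned array has its own `pii_map`, you need to keep each map paired with its source text if you intend to call `unmask` on downstream output later. A minimal bookkeeping sketch (the `MaskedRecord` helper is not part of the SDK; field names follow the `MaskResponse` shape above):

```typescript
// Sketch: pair each original text with its masked form and pii_map,
// so the matching map can be passed to unmask() later.
interface MaskedRecord {
  original: string;
  masked: string;
  piiMap: Record<
    string,
    { value: string; label: string; confidence: number; placeholder: string }
  >;
}

function pairWithSources(
  texts: string[],
  results: Array<{ masked_text: string; pii_map: MaskedRecord["piiMap"] }>,
): MaskedRecord[] {
  // maskBatch() is assumed to return results in input order.
  return texts.map((original, i) => ({
    original,
    masked: results[i].masked_text,
    piiMap: results[i].pii_map,
  }));
}
```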
### unmask(maskedText: string, piiMap: Record<string, PiiMapItem>): Promise<UnmaskResponse>

Restore original text from a masked version.
**Returns:**

```typescript
{
  unmasked_text: string;
  entities_restored: number;
  usage: {
    input_tokens: number;
    output_tokens: number;
    total_tokens: number;
  };
}
```
**Example:**

```typescript
const maskResult = await client.mask("My email is john@example.com");

const unmaskResult = await client.unmask(
  maskResult.masked_text,
  maskResult.pii_map,
);

console.log(unmaskResult.unmasked_text);
```
### safeChat(prompt: string, modelId: string, options?: SafeChatOptions): Promise<ChatCompletionResponse | Readable>

Chat with an LLM while automatically masking PII.

**Parameters:**

- `prompt` (string): User prompt
- `modelId` (string): Model ID (e.g., `'openai:gpt-4o'`, `'openai:gpt-4-turbo'`)
- `options.masking` (boolean, optional): Enable server-side masking (default: true)
- `options.chat_history` (ChatMessage[], optional): Chat history
- `options.stream` (boolean, optional): Enable streaming (default: false)
- `options.allowUnmaskedFallback` (boolean, optional): Allow unmasked requests if masking is unavailable (default: false)
**Returns (non-streaming):**

```typescript
{
  id: string;
  choices: [{
    message: {
      role: 'assistant';
      content: string;
    };
  }];
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```
**Returns (streaming):**

A Node.js `Readable` stream (SSE format).
**Example (non-streaming):**

```typescript
const response = await client.safeChat(
  "My email is john@example.com. Can you help?",
  "openai:gpt-4o",
  { masking: true },
);

console.log(response.choices[0].message.content);
```
**Example (streaming):**

```typescript
const stream = await client.safeChat("Tell me a story", "openai:gpt-4o", {
  stream: true,
  masking: false,
  allowUnmaskedFallback: true,
});

stream.on("data", (chunk: Buffer) => {
  const text = chunk.toString();
  const lines = text.split("\n");
  for (const line of lines) {
    if (line.startsWith("data: ")) {
      const data = JSON.parse(line.slice(6));
      if (data.choices?.[0]?.delta?.content) {
        process.stdout.write(data.choices[0].delta.content);
      }
    }
  }
});
```
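Note that network chunks are not guaranteed to end on SSE line boundaries: a `data:` line can be split across two `"data"` events, which would break a naive per-chunk parser like the one above. A more robust approach buffers the trailing partial line between chunks. The sketch below is not part of the SDK; it also skips an OpenAI-style `[DONE]` sentinel in case the backend emits one (an assumption, not documented behavior):

```typescript
// Line-buffering SSE payload extractor: holds any trailing partial
// line until the next chunk arrives, then returns complete "data:"
// payloads ready for JSON.parse.
function createSseExtractor(): (chunk: string) => string[] {
  let buffer = "";
  return (chunk: string): string[] => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the trailing partial line for later
    const payloads: string[] = [];
    for (const line of lines) {
      if (line.startsWith("data: ") && line.slice(6).trim() !== "[DONE]") {
        payloads.push(line.slice(6));
      }
    }
    return payloads;
  };
}
```

Usage: call the returned function from the `"data"` handler with `chunk.toString()`, then `JSON.parse` each payload it yields.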
## Error Handling

```typescript
import {
  AnotiaiAPIError,
  AuthenticationError,
  RateLimitError,
  ValidationError,
  NetworkError,
} from "@anotiai/pii-masker";

try {
  const result = await client.mask(text);
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error("Invalid API key");
  } else if (error instanceof RateLimitError) {
    console.error(`Rate limited. Retry after ${error.retryAfter} seconds`);
  } else if (error instanceof NetworkError) {
    console.error("Network issue:", error.message);
  } else {
    console.error("Unexpected error:", error);
  }
}
```
## Configuration

### Environment Variables

```bash
export ANOTIAI_API_KEY=your_api_key
export ANOTIAI_BASE_URL=https://custom-api.example.com
```

```typescript
// Will automatically use ANOTIAI_API_KEY from the environment
const client = new AnotiaiPIIMasker({});
```
### Programmatic

```typescript
const client = new AnotiaiPIIMasker({
  apiKey: "your_key",
  baseUrl: "https://anoti-backend-three.onrender.com",
  timeout: 120000,
  retryAttempts: 5,
});
```
## Retry Logic

The SDK automatically retries failed requests:

- Retries on rate limits (429), server errors (5xx), and network errors
- Exponential backoff with jitter
- Respects the `Retry-After` header
- Configurable via constructor options
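To illustrate the schedule the defaults imply (`retryDelay: 1000` ms doubling per attempt, capped at `maxRetryDelay: 10000` ms), here is a sketch of exponential backoff with "full jitter". The exact jitter formula the SDK uses may differ; this shows the shape only:

```typescript
// Exponential backoff with full jitter, using the constructor defaults.
// attempt 0 -> up to 1000 ms, attempt 1 -> up to 2000 ms, ... capped at 10000 ms.
function backoffDelay(
  attempt: number, // zero-based retry attempt
  baseMs = 1000,   // retryDelay default
  maxMs = 10000,   // maxRetryDelay default
): number {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.random() * cap; // full jitter: uniform in [0, cap)
}
```

Jitter spreads retries out so that many clients failing at the same moment do not all retry in lockstep.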
## Support

- Email: tech@anotiai.com