The fastest way to get started:
npx continum init
This interactive CLI will:
  • Detect your existing framework or help you set one up
  • Install the SDK automatically
  • Generate configuration files
  • Set up real-time alerts
  • Create example code
Starting from scratch? Run npx continum init in an empty folder to set up a complete project with your chosen framework and Continum built-in!

Manual Installation

Install the Continum SDK (current version: 0.6.4) using your preferred package manager:
npm install @continum/sdk

Requirements

  • Node.js 18+ or Bun 1.0+
  • TypeScript 5.0+ (recommended)
  • Your LLM provider SDK (OpenAI, Anthropic, Google, etc.)

Get Your API Key

Get your API key from the dashboard:
  1. Sign in with GitHub
  2. Create a customer account (Individual or Company)
  3. Generate an API key
  4. Save it securely (you won’t see it again)

Environment Variables

Create a .env file in your project root:
# Continum API key (required)
CONTINUM_API_KEY=co_your_api_key_here

# Optional: Alert webhooks
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
PAGERDUTY_KEY=YOUR_PAGERDUTY_INTEGRATION_KEY
DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/YOUR/WEBHOOK
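Because the SDK takes the key from the configuration you pass it, a missing or empty environment variable only surfaces when the first request fails. A small fail-fast check at startup catches this earlier. This is a sketch, not part of the SDK; `loadConfig` is a hypothetical helper name:

```typescript
// Sketch: validate required env vars once at startup (fail fast).
// CONTINUM_API_KEY is required; the webhook variables are optional.
function loadConfig(env: NodeJS.ProcessEnv = process.env) {
  const apiKey = env.CONTINUM_API_KEY;
  if (!apiKey) {
    throw new Error('CONTINUM_API_KEY is not set — add it to your .env file');
  }
  return {
    apiKey,
    slackWebhook: env.SLACK_WEBHOOK_URL,     // optional
    pagerdutyKey: env.PAGERDUTY_KEY,         // optional
    discordWebhook: env.DISCORD_WEBHOOK_URL, // optional
  };
}
```

Call `loadConfig()` once when your process boots and pass the result into the SDK configuration shown below.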

Basic Setup

import { protect } from '@continum/sdk';
import OpenAI from 'openai';

const openai = new OpenAI();

// Wrap your LLM call with protect()
const response = await protect(
  () => openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello, world!' }]
  }),
  {
    apiKey: process.env.CONTINUM_API_KEY!,
    preset: 'customer-support',
    comply: ['GDPR', 'SOC2']
  }
);

console.log(response.choices[0].message.content);

For production, configure once and use everywhere:
import { continum } from '@continum/sdk';

continum.configure({
  apiKey: process.env.CONTINUM_API_KEY!,
  preset: 'customer-support',
  comply: ['GDPR', 'SOC2'],
  alerts: {
    slack: process.env.SLACK_WEBHOOK_URL,
    pagerduty: process.env.PAGERDUTY_KEY
  }
});

// Now use protect() without passing config every time
const response = await continum.protect(
  () => openai.chat.completions.create({...})
);

TypeScript Support

The SDK is written in TypeScript and includes full type definitions:
import { protect, ContinumBlockedError } from '@continum/sdk';
import type { 
  ContinumConfig, 
  RiskLevel, 
  ViolationCode,
  AuditSignal 
} from '@continum/sdk';

const config: ContinumConfig = {
  apiKey: process.env.CONTINUM_API_KEY!,
  preset: 'customer-support',
  comply: ['GDPR', 'SOC2']
};

try {
  const response = await protect(
    () => openai.chat.completions.create({...}),
    config
  );
} catch (error) {
  if (error instanceof ContinumBlockedError) {
    const signal: AuditSignal = error.signal;
    console.log('Risk:', signal.riskLevel);
    console.log('Violations:', signal.violations);
  }
}
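In a request handler you will usually want to turn a blocked call into a safe user-facing reply rather than letting it crash. A minimal sketch, using the error shape shown above (`signal.violations`); `isContinumBlocked` and `safeReply` are hypothetical helpers standing in for an `instanceof ContinumBlockedError` check, not part of the SDK:

```typescript
// Structural stand-in for ContinumBlockedError: anything carrying a signal.
interface BlockedLike {
  signal: { riskLevel: string; violations: string[] };
}

// Type guard standing in for `error instanceof ContinumBlockedError`.
function isContinumBlocked(error: unknown): error is BlockedLike {
  return typeof error === 'object' && error !== null && 'signal' in error;
}

// Map a blocked request to a user-safe message; rethrow anything else.
function safeReply(error: unknown): string {
  if (isContinumBlocked(error)) {
    return `Request blocked: ${error.signal.violations.join(', ')}`;
  }
  throw error; // unrelated failure: surface it to your normal error handling
}
```

In the `catch` block above you would return `safeReply(error)` to the caller instead of (or after) logging the signal.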

Verify Installation

Test your setup with this simple script:
import { protect } from '@continum/sdk';
import OpenAI from 'openai';

const openai = new OpenAI();

async function test() {
  try {
    const response = await protect(
      () => openai.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: 'Say hello' }]
      }),
      {
        apiKey: process.env.CONTINUM_API_KEY!,
        preset: 'customer-support'
      }
    );
    
    console.log('✅ SDK working!');
    console.log('Response:', response.choices[0].message.content);
  } catch (error) {
    console.error('❌ Error:', error instanceof Error ? error.message : error);
  }
}

test();
Run it:
npx tsx test.ts
# or
node test.js

Common Issues

Missing API Key

Error: Missing apiKey in configuration
Solution: Ensure CONTINUM_API_KEY is set in your environment variables.

Invalid API Key

Error: Invalid API key
Solution: Verify your API key in the dashboard.

Provider SDK Not Installed

Error: Cannot find module 'openai'
Solution: Install the provider SDK:
npm install openai
# or
npm install @anthropic-ai/sdk
# or
npm install @google/generative-ai

Next Steps

Configuration

Learn about SDK configuration options

Presets

Explore presets and compliance frameworks

Blocking Mode

Use blocking mode for high-risk scenarios

Alerts

Set up real-time alerts