Nango provides a comprehensive testing framework that combines dry runs with mock-based snapshot testing. This guide will walk you through testing your integrations effectively.
Testing approach
Nango’s testing framework is built on three core concepts:
- Dry runs: Test your integrations against live API connections without affecting production data
- Mocks: Save API responses during dry runs to create reproducible test fixtures
- Snapshot testing: Automatically compare integration outputs against saved snapshots using Vitest
This approach ensures your integrations work correctly while providing fast, reliable tests that don’t depend on external APIs.
Dry run testing
Dry runs allow you to execute your sync functions and action functions against real API connections without saving data to your database. This is essential for:
- Testing integration logic before deployment
- Debugging issues with live data
- Generating test fixtures (mocks)
- Validating data transformations
Basic dry run
Execute a sync or action against an existing connection:
nango dryrun <sync-or-action-name> <connection-id>
Example:
nango dryrun fetch-tickets abc-123-connection
Dry run options
Common options for testing:
# Specify environment (dev or prod)
nango dryrun fetch-tickets abc-123 -e prod
# For action functions: pass input data
nango dryrun create-ticket abc-123 --input '{"title": "Test ticket"}'
# For sync functions: specify an initial checkpoint
nango dryrun fetch-tickets abc-123 --checkpoint '{ "lastModifiedAt": "2026-02-01T01:00:00.000Z" }'
# Use a specific integration when sync names overlap
nango dryrun fetch-tickets abc-123 --integration-id github
# Execute a specific variant
nango dryrun fetch-tickets abc-123 --variant premium
Data validation
Validation ensures your integration inputs and outputs match your defined schemas. This catches data transformation errors early.
Enable validation during dry runs
Use the --validate flag to enforce validation:
nango dryrun fetch-tickets abc-123 --validate
When validation is enabled:
- Action inputs are validated before execution
- Action outputs are validated after execution
- Sync records are validated before they would be saved
- Validation failures halt execution and display detailed error messages
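The lifecycle above can be sketched in plain TypeScript. This is an illustrative model of what `--validate` does for an action, not Nango's actual implementation; the `Schema` shape and `runActionWithValidation` name are hypothetical.

```typescript
// Hypothetical sketch of the --validate lifecycle for an action:
// validate input, run, validate output, throw on any failure.
type Schema<T> = { parse: (value: unknown) => T };

function runActionWithValidation<I, O>(
  inputSchema: Schema<I>,
  outputSchema: Schema<O>,
  exec: (input: I) => O,
  rawInput: unknown
): O {
  const input = inputSchema.parse(rawInput); // 1. input validated before execution
  const output = exec(input);                // 2. action runs
  return outputSchema.parse(output);         // 3. output validated after execution
}

// Tiny hand-rolled schema for demonstration purposes only
const stringSchema: Schema<string> = {
  parse: (v) => {
    if (typeof v !== "string") throw new Error("expected string");
    return v;
  },
};

const result = runActionWithValidation(
  stringSchema,
  stringSchema,
  (title) => `created: ${title}`,
  "Test ticket"
);
```

A failing `parse` call throws before any further work happens, which mirrors the halt-on-failure behavior described above.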
Validation with Zod
You can also validate data directly in your integration code using Zod:
import { createSync } from 'nango';
import { z } from 'zod';
const TicketSchema = z.object({
id: z.string(),
title: z.string(),
status: z.enum(['open', 'closed']),
createdAt: z.string().datetime(),
});
export default createSync({
exec: async (nango) => {
const response = await nango.get({ endpoint: '/tickets' });
// Validate the API response
const tickets = response.data.map(ticket => {
return TicketSchema.parse(ticket); // Throws if validation fails
});
await nango.batchSave(tickets, 'Ticket');
},
});
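`TicketSchema.parse` aborts the whole sync on the first bad record. If you would rather skip invalid records and keep going (Zod's `safeParse` supports this pattern), the idea looks like the following sketch. `validateTicket` here is a hand-rolled stand-in for `TicketSchema.safeParse` so the example is self-contained; in real code you would use Zod directly.

```typescript
// Sketch: skip invalid records instead of aborting the whole sync.
// validateTicket stands in for Zod's TicketSchema.safeParse.
type Ticket = { id: string; title: string; status: "open" | "closed" };

function validateTicket(
  raw: any
): { success: true; data: Ticket } | { success: false; error: string } {
  if (typeof raw?.id !== "string") return { success: false, error: "id must be a string" };
  if (typeof raw?.title !== "string") return { success: false, error: "title must be a string" };
  if (raw?.status !== "open" && raw?.status !== "closed") return { success: false, error: "invalid status" };
  return { success: true, data: { id: raw.id, title: raw.title, status: raw.status } };
}

const raw = [
  { id: "TKT-1", title: "Broken login", status: "open" },
  { id: 42, title: "Bad record", status: "open" }, // invalid: numeric id
];

const valid: Ticket[] = [];
for (const item of raw) {
  const result = validateTicket(item);
  if (result.success) valid.push(result.data);
  // in a real sync you might report result.error via nango.log instead
}
```

Only the valid records reach `batchSave`; the invalid ones are dropped (and ideally logged) rather than failing the run.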
Validation error output
When validation fails, you’ll see detailed error information:
Invalid sync record. Use `--validate` option to see the details (invalid_sync_record)
{
"validation": [
{
"path": "createdAt",
"message": "Invalid datetime string! Must be UTC.",
"code": "invalid_string"
}
],
"model": "Ticket"
}
Saving mocks for tests
Mocks are saved API responses that allow you to run tests without hitting external APIs. This makes tests faster and more reliable.
Generate mocks with dry run
Use the --save flag to save all API responses:
nango dryrun fetch-tickets abc-123 --save
Important: When using --save, validation is automatically enabled. Mocks are only saved if validation passes, ensuring your test fixtures contain valid data.
Mock file structure
Mocks are saved in a single .test.json file that can be used alongside your test file:
github/
├── tests/
│ ├── fetch-tickets.test.ts
│ └── fetch-tickets.test.json
The .test.json file contains all the necessary mock data for a given test:
{
"input": { "title": "Test ticket" },
"output": { "id": "TKT-123", "status": "created" },
"nango": {
"getConnection": { "connectionId": "abc-123", "provider": "github" },
"getMetadata": { "accountId": "test-123" },
"batchSave": {
"Ticket": [{ "id": "TKT-123", "title": "Test ticket" }]
},
"batchDelete": {
"Ticket": [{ "_nango_id": "del-456" }]
}
},
"api": {
"GET": {
"/tickets": {
"response": [{ "id": "TKT-123", "title": "Test ticket" }]
}
},
"POST": {
"/issues": {
"response": { "id": "ISS-789" }
}
}
}
}
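The structure above can be summarized with a TypeScript type. This shape is inferred from the example for documentation purposes; it is not Nango's own type definition, and optional fields reflect that actions and syncs use different subsets.

```typescript
// Illustrative shape of a .test.json file, inferred from the example above.
// Not Nango's official typings.
type MockFile = {
  input?: unknown;  // action input (actions only)
  output?: unknown; // expected action output
  nango?: {
    getConnection?: unknown;
    getMetadata?: unknown;
    batchSave?: Record<string, unknown[]>;   // model name -> records saved
    batchDelete?: Record<string, unknown[]>; // model name -> records deleted
  };
  // HTTP method -> endpoint -> saved response
  api?: Record<string, Record<string, { response: unknown }>>;
};

const mock: MockFile = {
  nango: { batchSave: { Ticket: [{ id: "TKT-123", title: "Test ticket" }] } },
  api: { GET: { "/tickets": { response: [{ id: "TKT-123" }] } } },
};
```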
For sync functions that rely on connection metadata, you can provide test metadata:
nango dryrun fetch-tickets abc-123 --save --metadata '{"accountId": "test-123"}'
Or load from a file:
nango dryrun fetch-tickets abc-123 --save --metadata @fixtures/metadata.json
Testing with Vitest
Nango uses Vitest as its testing framework. Vitest is fast, has a great developer experience, and provides snapshot testing out of the box.
Setup
Install Vitest as a dev dependency:
npm install -D vitest
Generate tests for your integrations:
nango generate:tests
You can also generate tests for specific integrations, sync functions, or action functions:
# Generate tests for a specific integration
nango generate:tests -i github
# Generate tests for a specific sync function
nango generate:tests -s fetch-tickets
# Generate tests for a specific action function
nango generate:tests -a create-ticket
# Combine flags for more specific targeting
nango generate:tests -i github -s fetch-tickets
Run your tests:
npm test
Auto-generated tests
When you run nango generate:tests, Nango creates test files for all your integrations:
Sync test example:
import { vi, expect, it, describe, beforeAll } from 'vitest';
import createSync from '../syncs/fetch-tickets.js';
describe('github fetch-tickets tests', () => {
let nangoMock;
let batchSaveSpy;
beforeAll(async () => {
nangoMock = new global.vitest.NangoSyncMock({
dirname: __dirname,
name: "fetch-tickets",
Model: "Ticket"
});
// Create the spy after the mock exists; spying at describe level would
// run before beforeAll and fail on an undefined mock.
batchSaveSpy = vi.spyOn(nangoMock, 'batchSave');
});
const models = 'Ticket'.split(',');
it('should get, map correctly the data and batchSave the result', async () => {
await createSync.exec(nangoMock);
for (const model of models) {
const expectedBatchSaveData = await nangoMock.getBatchSaveData(model);
const spiedData = batchSaveSpy.mock.calls.flatMap(call => {
if (call[1] === model) {
return call[0];
}
return [];
});
const spied = JSON.parse(JSON.stringify(spiedData));
expect(spied).toStrictEqual(expectedBatchSaveData);
}
});
it('should get, map correctly the data and batchDelete the result', async () => {
await createSync.exec(nangoMock);
for (const model of models) {
const batchDeleteData = await nangoMock.getBatchDeleteData(model);
if (batchDeleteData && batchDeleteData.length > 0) {
expect(nangoMock.batchDelete).toHaveBeenCalledWith(batchDeleteData, model);
}
}
});
});
Action test example:
import { vi, expect, it, describe, beforeAll } from 'vitest';
import createAction from '../actions/create-ticket.js';
describe('github create-ticket tests', () => {
let nangoMock;
beforeAll(async () => {
nangoMock = new global.vitest.NangoActionMock({
dirname: __dirname,
name: "create-ticket",
Model: "Ticket"
});
});
it('should output the action output that is expected', async () => {
const input = await nangoMock.getInput();
const response = await createAction.exec(nangoMock, input);
const output = await nangoMock.getOutput();
expect(response).toEqual(output);
});
});
How mocks work in tests
The NangoSyncMock and NangoActionMock classes automatically load your saved mocks from the .test.json file:
- API requests are intercepted and return saved mock responses from the api section.
- Input data is loaded from the input property for action functions.
- Expected outputs are loaded from the output property.
- Tests compare actual outputs against expected outputs.
This means:
- Tests run instantly (no API calls)
- Tests are deterministic (same input = same output)
- Tests work offline
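The interception mechanism can be illustrated with a stripped-down mock. This is a simplified sketch of the idea, not Nango's actual `NangoSyncMock` implementation: API calls are answered from the saved `api` section of the mock file instead of the network. (The real `nango.get` is async; this sketch is synchronous for brevity, and `TinySyncMock` is a hypothetical name.)

```typescript
// Simplified illustration of mock interception: look up the saved response
// for a method + endpoint instead of making a network call.
type SavedMocks = {
  api: Record<string, Record<string, { response: unknown }>>;
};

class TinySyncMock {
  constructor(private mocks: SavedMocks) {}

  // Mirrors nango.get({ endpoint }), minus async, for illustration.
  get({ endpoint }: { endpoint: string }): { data: unknown } {
    const saved = this.mocks.api["GET"]?.[endpoint];
    if (!saved) throw new Error(`No saved mock for GET ${endpoint}`);
    return { data: saved.response };
  }
}

const tinyMock = new TinySyncMock({
  api: { GET: { "/tickets": { response: [{ id: "TKT-123" }] } } },
});

const res = tinyMock.get({ endpoint: "/tickets" });
```

Because every request resolves from local data, a missing mock fails loudly instead of silently hitting the real API.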
If you have tests using the old multi-file mock format, you can automatically migrate them to the new unified format.
Set the MIGRATE_MOCKS environment variable to 2026-01 and run your tests:
MIGRATE_MOCKS=2026-01 npm test
This will:
- Run your tests using the old mock files.
- Intercept all mock data accessed during the test run.
- Save the data into a new .test.json file.
The old mock directories (mocks and fixtures) can then be safely deleted once a clean green test run is achieved.
Note: This migration tool works by intercepting the mock data loaded by your existing tests. It requires that your tests are using the standard Nango mock utilities (NangoSyncMock or NangoActionMock) imported from nango/test.
Pagination bug in legacy test utilities
The legacy test utilities had a bug where pagination would sometimes stop after the first page. This means:
- Tests appeared to pass but were only testing the first page of results
- Mock files may exist for subsequent pages, but might be incorrect (hashes or params were never checked)
- Some mock files may be missing entirely if they were never recorded
How the migration handles this:
The migration tool performs a best-effort recovery:
- It first tries to match mock files by their exact request hash
- If that fails, it scans all available mock files and matches by comparing the actual request parameters (endpoint, query params, headers)
- This handles cases where the hash in the filename is wrong but the request data inside the file is correct
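The fallback strategy can be sketched as follows. This is a hedged illustration of the matching logic described above; `MockEntry` and `findMock` are hypothetical names, not the migration tool's API, and the tool's real parameter comparison is more thorough than a JSON string check.

```typescript
// Sketch: match by exact request hash first, then fall back to comparing
// the request data stored inside each mock file.
type MockEntry = {
  hash: string; // hash embedded in the old file name (may be wrong)
  request: { endpoint: string; params: Record<string, string> };
  response: unknown;
};

function findMock(
  entries: MockEntry[],
  wantedHash: string,
  wanted: MockEntry["request"]
): MockEntry | undefined {
  // 1. exact hash match
  const byHash = entries.find((e) => e.hash === wantedHash);
  if (byHash) return byHash;
  // 2. fall back to matching the actual request parameters
  return entries.find(
    (e) =>
      e.request.endpoint === wanted.endpoint &&
      JSON.stringify(e.request.params) === JSON.stringify(wanted.params)
  );
}

const entries: MockEntry[] = [
  { hash: "stale-hash", request: { endpoint: "/tickets", params: { page: "2" } }, response: ["..."] },
];

// The hash lookup fails, but comparing request data recovers the right file.
const found = findMock(entries, "correct-hash", { endpoint: "/tickets", params: { page: "2" } });
```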
What to do if tests fail after migration:
If your tests fail after running the migration, it means one of the following:
- The mock file for a paginated request doesn’t exist at all (it was never recorded due to the bug)
- The request parameters stored in the mock file don’t match what Nango’s pagination now sends
To fix failing tests, re-record your mocks:
nango dryrun <sync-name> <connection-id> --save
This will generate a complete .test.json file with all paginated responses using Nango’s actual pagination implementation.
Running tests
# Run all tests
npm test
# Run tests in watch mode
npm test -- --watch
# Run tests for a specific integration
npm test github
# Run a specific test file
npm test fetch-tickets.test.ts
# Run with coverage
npm test -- --coverage
Test configuration
Vitest is configured via vite.config.ts in your project root:
import { defineConfig } from 'vitest/config';
export default defineConfig({
test: {
globals: true,
environment: 'node',
setupFiles: ['./vitest.setup.ts'],
},
});
The vitest.setup.ts file makes Nango mocks available globally:
import { NangoActionMock, NangoSyncMock } from "nango/test";
globalThis.vitest = {
NangoActionMock,
NangoSyncMock,
};
Customizing tests with business logic
While auto-generated tests validate basic data flow, you often need custom tests for business logic.
Adding custom assertions
Extend the generated tests with additional assertions:
import { vi, expect, it, describe, beforeAll } from 'vitest';
import createSync from '../syncs/fetch-tickets.js';
describe('github fetch-tickets tests', () => {
let nangoMock;
beforeAll(async () => {
nangoMock = new global.vitest.NangoSyncMock({
dirname: __dirname,
name: "fetch-tickets",
Model: "Ticket"
});
});
it('should correctly transform ticket priorities', async () => {
await createSync.exec(nangoMock);
const savedTickets = await nangoMock.getBatchSaveData('Ticket');
// Custom business logic validation
savedTickets.forEach(ticket => {
// Ensure priority is normalized
expect(['low', 'medium', 'high', 'critical']).toContain(ticket.priority);
// Ensure dates are ISO 8601
expect(ticket.createdAt).toMatch(/^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/);
// Ensure ticket numbers are prefixed correctly
if (ticket.source === 'github') {
expect(ticket.id).toMatch(/^GH-\d+$/);
}
});
});
it('should filter out spam tickets', async () => {
await createSync.exec(nangoMock);
const savedTickets = await nangoMock.getBatchSaveData('Ticket');
// Verify spam filtering logic
const spamTickets = savedTickets.filter(t =>
t.title.toLowerCase().includes('spam')
);
expect(spamTickets).toHaveLength(0);
});
});
Testing error handling
Test how your integration handles errors:
it('should handle API errors gracefully', async () => {
// Create a mock that will throw an error
const errorMock = new global.vitest.NangoSyncMock({
dirname: __dirname,
name: "fetch-tickets-error",
Model: "Ticket"
});
// Mock the get method to throw
vi.spyOn(errorMock, 'get').mockRejectedValue(
new Error('API rate limit exceeded')
);
// Verify error is logged
const logSpy = vi.spyOn(errorMock, 'log');
await expect(async () => {
await createSync.exec(errorMock);
}).rejects.toThrow('API rate limit exceeded');
expect(logSpy).toHaveBeenCalledWith(
expect.stringContaining('rate limit'),
{ level: 'error' }
);
});
Testing pagination
Verify that your pagination logic works correctly:
it('should fetch all pages of results', async () => {
await createSync.exec(nangoMock);
const savedTickets = await nangoMock.getBatchSaveData('Ticket');
// Verify we got results from multiple pages
// (based on your pagination implementation)
expect(savedTickets.length).toBeGreaterThan(100); // Assuming page size is 100
});
Testing sync functions with checkpoints
Test that checkpoint logic works:
it('should only fetch tickets after checkpoint date', async () => {
const getSpy = vi.spyOn(nangoMock, 'get');
const saveCheckpointSpy = vi.spyOn(nangoMock, 'saveCheckpoint');
// Mock getCheckpoint to return a previous checkpoint
vi.spyOn(nangoMock, 'getCheckpoint').mockResolvedValue({
lastModifiedISO: '2024-01-01T00:00:00Z'
});
await createSync.exec(nangoMock);
// Verify the API request used the checkpoint value
expect(getSpy).toHaveBeenCalledWith(
expect.objectContaining({
params: expect.objectContaining({
since: '2024-01-01T00:00:00Z'
})
})
);
// Verify checkpoint was saved
expect(saveCheckpointSpy).toHaveBeenCalled();
});
Parameterized tests
Test multiple scenarios with different inputs:
import { it, describe, expect, beforeAll } from 'vitest';
import createSync from '../syncs/fetch-tickets.js';
describe.each([
{ priority: 'P0', expected: 'critical' },
{ priority: 'P1', expected: 'high' },
{ priority: 'P2', expected: 'medium' },
{ priority: 'P3', expected: 'low' },
])('priority mapping for $priority', ({ priority, expected }) => {
let nangoMock;
beforeAll(async () => {
nangoMock = new global.vitest.NangoSyncMock({
dirname: __dirname,
name: `fetch-tickets-${priority}`,
Model: "Ticket"
});
});
it(`should map ${priority} to ${expected}`, async () => {
await createSync.exec(nangoMock);
const savedTickets = await nangoMock.getBatchSaveData('Ticket');
const ticket = savedTickets.find(t => t.rawPriority === priority);
expect(ticket?.priority).toBe(expected);
});
});