Package Exports
- datadog-logger-integrations
- datadog-logger-integrations/bunyan
- datadog-logger-integrations/consola
- datadog-logger-integrations/electronLog
- datadog-logger-integrations/pino
- datadog-logger-integrations/winston
Datadog Logger Integrations
"Transport" for popular logger such as Pino, Winston and more!
Features | Installation | API | Contributing
✨ Features
- ✅ TypeScript
- ✅ DataDog V2 API
- ✅ Performance, with log batching
- ✅ Lightweight, only one dependency (the DataDog SDK, of course)
- ✅ AWS Lambda support
Motivation
I am using a Lambda-like platform and was experiencing log loss. This library aims to provide a "correct" stream implementation that lets users wait for the stream to drain before shutting down (see Usage with Lambda below).
📦 Installation
NPM
npm i datadog-logger-integrations
Yarn
yarn add datadog-logger-integrations
PNPM
pnpm i datadog-logger-integrations
💻 API
Integrations
Bunyan
import bunyan from 'bunyan'
import { LogStreamConfig } from 'datadog-logger-integrations'
import { getDataDogStream } from 'datadog-logger-integrations/bunyan'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
const stream = getDataDogStream(opts);
const logger = bunyan.createLogger({
name: 'test',
level: 'debug',
stream,
});
logger.info('test');
Consola
[!NOTE]
If you are using it with Lambda, you must use the Stream API.
import { createConsola } from 'consola'
import { LogStreamConfig } from 'datadog-logger-integrations'
import { DataDogReporter } from 'datadog-logger-integrations/consola'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
const logger = createConsola({
reporters: [
new DataDogReporter(opts),
],
});
logger.info('test');
Stream API
import { createConsola } from 'consola'
import { LogStreamConfig } from 'datadog-logger-integrations'
import { getDataDogStream } from 'datadog-logger-integrations/consola'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
const stream = getDataDogStream(opts);
const logger = createConsola({
reporters: [
{
log: (logObj) => {
if (!stream.writableEnded)
stream.write(logObj);
},
},
],
});
logger.info('test');
Electron Log
[!NOTE]
If you want to wait for logs to flush before the process closes, you must use the Stream API (a flush sketch follows the Stream API example below).
import logger from 'electron-log'
import { LogStreamConfig } from 'datadog-logger-integrations'
import { dataDogTransport } from 'datadog-logger-integrations/electronLog'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
logger.transports.datadog = dataDogTransport(
{ level: 'debug' },
opts,
);
logger.info('test');
Stream API
import logger from 'electron-log'
import { LogStreamConfig } from 'datadog-logger-integrations'
import { getDataDogStream } from 'datadog-logger-integrations/electronLog'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
const stream = getDataDogStream(opts);
// Transport function: forward each electron-log message to the Datadog stream
// (typed loosely here; electron-log passes a log message object)
const transport: any = (message: any) => {
  if (!stream.writableEnded) stream.write(message);
};
transport.level = 'debug' as const;
transport.transforms = [];
logger.transports.datadog = transport;
logger.info('test');
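To actually wait for buffered logs before the app quits, end the stream and wait for its close event. The sketch below is only an assumption about how to wire this into Electron's before-quit hook using the `stream` created above; it is not part of this package's API, and the same drain pattern is shown in Usage with Lambda below.
import { app } from 'electron'

let flushing = false;
app.on('before-quit', (event) => {
  if (flushing) return;
  // Hold the quit until the Datadog stream has drained
  event.preventDefault();
  flushing = true;
  // Resume quitting once all buffered logs have been flushed
  stream.on('close', () => app.quit());
  stream.end();
});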
Pino
[!NOTE]
If you are using it with Lambda, you must use the Stream API.
import pino from 'pino'
import { LogStreamConfig } from 'datadog-logger-integrations'
const options: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
const logger = pino(
{},
pino.transport({
target: 'datadog-logger-integrations/pino',
options
}),
);
logger.info('test');
With Stream API
import pino from 'pino'
import { LogStreamConfig } from 'datadog-logger-integrations'
import { getDataDogStream } from 'datadog-logger-integrations/pino'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
const stream = getDataDogStream(opts);
const logger = pino(
{},
pino.multistream([stream]),
);
logger.info('test');
Winston
[!NOTE]
If you are using it with Lambda, you must use the stream directly.
import winston from 'winston'
import { LogStreamConfig } from 'datadog-logger-integrations'
import { DataDogTransport } from 'datadog-logger-integrations/winston'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
const logger = winston.createLogger({
transports: [
new DataDogTransport(opts),
],
});
logger.info('test');
Stream API
import winston from 'winston'
import { LogStreamConfig } from 'datadog-logger-integrations'
import { getDataDogStream } from 'datadog-logger-integrations/winston'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
ddTags: 'env:test',
ddSource: "my source",
service: "my service",
}
const stream = getDataDogStream(opts);
const logger = winston.createLogger({
transports: [
new winston.transports.Stream({
stream,
}),
],
});
logger.info('test');
Types
LogStreamConfig
import { v2 } from '@datadog/datadog-api-client';

export type LogStreamConfig = {
ddClientConfig?: Record<string, any>; // @datadog/datadog-api-client createConfiguration option
ddServerConfig?: { // @datadog/datadog-api-client setServerVariables option
site?: string;
subdomain?: string;
protocol?: string;
};
ddSource?: string;
ddTags?: string;
service?: string;
sendIntervalMs?: number;
batchSize?: number;
logMessageBuilder?: (
log: Record<string, unknown>,
) => v2.LogsApiSubmitLogRequest['body'][number];
debug?: boolean;
};
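For example, the optional batching fields can be set alongside the usual options. This is just a sketch: the numbers below are placeholders, not recommended or default values.
import { LogStreamConfig } from 'datadog-logger-integrations'
const batchedOpts: LogStreamConfig = {
  ddClientConfig: {
    authMethods: {
      apiKeyAuth: apiKey,
    },
  },
  ddSource: "my source",
  service: "my service",
  // Placeholder values: how often batched logs are sent, and how many logs per batch
  sendIntervalMs: 5000,
  batchSize: 50,
  debug: false,
}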
Use the stream directly
You can integrate with any logger that supports a writable stream by using the stream directly.
But please consider contributing to this repo, so everyone can use it!
import { DataDogWritableStream, LogStreamConfig } from 'datadog-logger-integrations'
const opts: LogStreamConfig = {
ddClientConfig: {
authMethods: {
apiKeyAuth: apiKey,
},
},
}
const logger = fancyLogger({
stream: new DataDogWritableStream({
...opts,
// You must provide your own builder function that takes the logger input and convert to DataDog format
logMessageBuilder: (({ hostname, ...parsedItem }) => ({
ddsource: "some source",
ddtags: "tags",
service: "unknown service",
message: JSON.stringify({
date: new Date().toISOString(),
...parsedItem,
}),
hostname,
}))
})
})
Usage with Lambda
Because sending logs to Datadog is an asynchronous call, it is possible for a short-lived function to end before the logs are sent out. You will have to manually wait for the stream to end.
These examples use Pino, but the other integrations work the same way.
Example with Explicit Resource Management, requires Node 20.4.0+
import pino from 'pino'
import { getDataDogStream } from 'datadog-logger-integrations/pino'
// `opts` is a LogStreamConfig, as in the examples above
const getLogger = () => {
const stream = getDataDogStream(opts);
const instance = pino(
{},
pino.multistream([stream]),
);
return {
instance,
[Symbol.asyncDispose]: async () => {
// Wait for the stream to fully drain
await new Promise<void>((resolve) => {
stream.on('close', () => {
resolve();
});
stream.end();
});
}
}
}
export const handler = async () => {
await using logger = getLogger();
logger.instance.info("Hello")
return {};
};
Example for Node < 20.4.0
import pino from 'pino'
import { getDataDogStream } from 'datadog-logger-integrations/pino'
// `opts` is a LogStreamConfig, as in the examples above
export const handler = async () => {
const stream = getDataDogStream(opts);
const logger = pino(
{},
pino.multistream([stream]),
);
logger.info('test');
// Wait for the stream to fully drain
await new Promise<void>((resolve) => {
stream.on('close', () => {
resolve();
});
stream.end();
});
return {};
};
🤝 Contributing
Development
Local Development
pnpm i
pnpm test
Build
pnpm build
Release
This repo uses Release Please to release.
To release a new version
- Merge your changes into the main branch.
- An automated GitHub Action will run, triggering the creation of a Release PR.
- Merge the release PR.
- Wait for the second GitHub Action to run automatically.
- Congratulations, you're all set!