@firesystem/s3

1.0.3 · MIT

AWS S3 implementation of Virtual File System

Package Exports

  • @firesystem/s3
  • @firesystem/s3/provider


@firesystem/s3

AWS S3 implementation of the Firesystem Virtual File System. Store and manage files in Amazon S3 buckets with a familiar file system API. Built on top of @firesystem/core's BaseFileSystem, providing full compatibility with the Firesystem ecosystem including reactive events and multi-project workspaces.

Features

  • 🌐 Full S3 Integration - Seamless read/write operations with S3 buckets
  • 🔄 Dual Mode Operation
    • Strict Mode: Full filesystem compatibility with directory markers
    • Lenient Mode: Works with existing S3 buckets without modifications
  • 📁 Virtual Directories - Full directory support using S3 prefixes
  • 🏷️ Rich Metadata - Store custom metadata with S3 object tags
  • 🔍 Prefix Isolation - Scope operations to specific bucket prefixes
  • 📡 Reactive Events - Real-time notifications for all operations
  • 🔐 Full TypeScript - Complete type safety and IntelliSense
  • 🚀 Production Ready - Battle-tested with comprehensive test coverage
  • 🏗️ BaseFileSystem - Extends core BaseFileSystem for consistency
  • 🔌 Workspace Compatible - First-class support for @workspace-fs/core

Installation

npm install @firesystem/s3
# or
yarn add @firesystem/s3
# or
pnpm add @firesystem/s3

Quick Start

Direct Usage

import { S3FileSystem } from "@firesystem/s3";

// Create filesystem instance
const fs = new S3FileSystem({
  bucket: "my-bucket",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

// Initialize (required for strict mode)
await fs.initialize();

// Use like any filesystem
await fs.writeFile("/hello.txt", "Hello, S3!");
await fs.mkdir("/documents");
await fs.writeFile("/documents/report.pdf", binaryData);

Workspace Usage

import { WorkspaceFileSystem } from "@workspace-fs/core";
import { s3Provider } from "@firesystem/s3/provider";

// Register S3 provider
const workspace = new WorkspaceFileSystem();
workspace.registerProvider(s3Provider);

// Load S3 project
const project = await workspace.loadProject({
  id: "cloud-storage",
  name: "Cloud Storage",
  source: {
    type: "s3",
    config: {
      bucket: "my-bucket",
      region: "us-east-1",
    },
  },
});

// Use through project
await project.fs.writeFile("/data.json", { value: 42 });

Reading Files

// Read back with the direct S3FileSystem instance
const file = await fs.readFile("/hello.txt");
console.log(file.content); // "Hello, S3!"

const files = await fs.readDir("/documents");
console.log(files); // [{ name: "report.pdf", ... }]

Configuration

Basic Configuration

const fs = new S3FileSystem({
  bucket: "my-bucket", // Required: S3 bucket name
  region: "us-east-1", // Required: AWS region
  credentials: {
    // Required: AWS credentials
    accessKeyId: "...",
    secretAccessKey: "...",
  },
  prefix: "/app/data/", // Optional: Scope to bucket prefix
  mode: "strict", // Optional: "strict" or "lenient"
});

Working with Existing S3 Buckets

Use lenient mode to work seamlessly with existing S3 buckets:

const fs = new S3FileSystem({
  bucket: "existing-bucket",
  region: "us-west-2",
  mode: "lenient", // No directory markers needed
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

// Works with existing S3 structure
const files = await fs.readDir("/");
// Returns virtual directories inferred from object keys

Prefix Isolation

Isolate your filesystem to a specific bucket prefix:

const fs = new S3FileSystem({
  bucket: "shared-bucket",
  region: "eu-west-1",
  prefix: "/tenants/customer-123/",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

// All operations are scoped to the prefix
await fs.writeFile("/config.json", { version: "1.0" });
// Actually writes to: s3://shared-bucket/tenants/customer-123/config.json

S3-Compatible Services

Works with S3-compatible services like MinIO, Wasabi, or DigitalOcean Spaces:

const fs = new S3FileSystem({
  bucket: "my-bucket",
  region: "us-east-1",
  credentials: {
    accessKeyId: "minioadmin",
    secretAccessKey: "minioadmin",
  },
  clientOptions: {
    endpoint: "http://localhost:9000",
    forcePathStyle: true, // Required for MinIO
  },
});

API Examples

File Operations

// Write text file
await fs.writeFile("/notes.txt", "My notes");

// Write JSON
await fs.writeFile("/config.json", {
  name: "myapp",
  version: "1.0.0",
});

// Write binary data
const buffer = new ArrayBuffer(1024);
await fs.writeFile("/data.bin", buffer);

// Read file
const file = await fs.readFile("/notes.txt");
console.log(file.content); // "My notes"
console.log(file.size); // 8
console.log(file.created); // Date object

// Delete file
await fs.deleteFile("/notes.txt");

// Check existence
const exists = await fs.exists("/notes.txt"); // false

Directory Operations

// Create directory
await fs.mkdir("/projects");

// Create nested directories
await fs.mkdir("/projects/2024/january", true);

// List directory contents
const entries = await fs.readDir("/projects");
// [
//   { name: "2024", type: "directory", ... }
// ]

// Remove empty directory
await fs.rmdir("/projects/temp");

// Remove directory recursively
await fs.rmdir("/projects/old", true);

Advanced Operations

// Copy files
await fs.copy("/template.docx", "/documents/new.docx");

// Move/rename files
await fs.rename("/old-name.txt", "/new-name.txt");

// Move multiple files
await fs.move(["/file1.txt", "/file2.txt"], "/archive/");

// Get file stats
const stats = await fs.stat("/large-file.zip");
console.log(stats.size); // File size in bytes
console.log(stats.modified); // Last modified date

// Search with glob patterns
const jsFiles = await fs.glob("**/*.js");
const testFiles = await fs.glob("**/test-*.js");
const rootFiles = await fs.glob("*"); // Root level only

Reactive Events

S3FileSystem extends BaseFileSystem and provides a full reactive event system:

// FileSystemEvents is assumed to be exported by @firesystem/core
import { FileSystemEvents } from "@firesystem/core";

// File operation events
fs.events.on(FileSystemEvents.FILE_WRITTEN, ({ path, size }) => {
  console.log(`File ${path} uploaded to S3 (${size} bytes)`);
});

fs.events.on(FileSystemEvents.FILE_READ, ({ path, size }) => {
  console.log(`File ${path} downloaded from S3 (${size} bytes)`);
});

fs.events.on(FileSystemEvents.FILE_DELETED, ({ path }) => {
  console.log(`File ${path} removed from S3`);
});

// Operation tracking
fs.events.on(FileSystemEvents.OPERATION_START, ({ operation, path, id }) => {
  console.log(`Starting ${operation} on ${path}`);
});

fs.events.on(
  FileSystemEvents.OPERATION_END,
  ({ operation, path, duration }) => {
    console.log(`Completed ${operation} on ${path} in ${duration}ms`);
  },
);

fs.events.on(FileSystemEvents.OPERATION_ERROR, ({ operation, path, error }) => {
  console.error(`Operation ${operation} failed on ${path}:`, error);
});

// Initialization events
fs.events.on(FileSystemEvents.INITIALIZED, ({ duration }) => {
  console.log(`S3 filesystem initialized in ${duration}ms`);
});

// Watch for changes (client-side simulation)
const watcher = fs.watch("**/*.json", (event) => {
  console.log(`File ${event.path} was ${event.type}`);
});

// Stop watching
watcher.dispose();

Custom Metadata

// Write file with metadata
await fs.writeFile("/document.pdf", pdfBuffer, {
  tags: ["important", "contract"],
  author: "John Doe",
  department: "Legal",
});

// Read file with metadata
const file = await fs.readFile("/document.pdf");
console.log(file.metadata);
// { tags: ["important", "contract"], author: "John Doe", ... }

Mode Comparison

Feature                   | Strict Mode            | Lenient Mode
--------------------------|------------------------|------------------
Directory markers         | Creates "…/" objects   | Virtual only
Parent directory check    | Required               | Not enforced
Existing S3 compatibility | Requires markers       | Works with any
Performance               | More S3 requests       | Fewer requests
Best for                  | New applications       | Existing buckets

Implementation Details

Architecture

S3FileSystem extends BaseFileSystem from @firesystem/core, inheriting:

  • Standard permission checks (canModify, canCreateIn)
  • Atomic write simulation via temp files
  • Consistent error handling
  • Path normalization utilities
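
The temp-file atomic-write simulation can be pictured with a short sketch. Everything here is illustrative: the helper name, the temp-key naming, and the in-memory Map standing in for the bucket are assumptions, not the package's internals.

```typescript
// Sketch of atomic-write simulation: upload to a temp key first, then
// "copy" to the final key and delete the temp key, so a failed upload
// never leaves a partially written object at the final key.
type Bucket = Map<string, string>;

function putObjectAtomic(bucket: Bucket, key: string, body: string): void {
  const tempKey = `${key}.tmp-${Date.now()}`;
  bucket.set(tempKey, body);             // 1. upload under a temp key
  bucket.set(key, bucket.get(tempKey)!); // 2. copy temp -> final key
  bucket.delete(tempKey);                // 3. remove the temp key
}

const bucket: Bucket = new Map();
putObjectAtomic(bucket, "app/config.json", '{"version":"1.0"}');
```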

Directory Handling

  • Strict Mode: Creates empty objects with "/" suffix as directory markers
  • Lenient Mode: Directories are virtual and inferred from object prefixes
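
In lenient mode the directory tree can be recovered purely from the flat object keys: anything with a deeper path segment under the current prefix is a virtual directory. A rough sketch of that inference (hypothetical helper, not the package's internal code):

```typescript
// Infer the immediate children of `dir` from flat S3 object keys
// (lenient mode: no directory-marker objects exist).
function inferEntries(
  keys: string[],
  dir: string,
): { name: string; type: "file" | "directory" }[] {
  const prefix = dir === "/" ? "" : dir.replace(/^\//, "").replace(/\/?$/, "/");
  const seen = new Map<string, "file" | "directory">();
  for (const key of keys) {
    if (!key.startsWith(prefix)) continue;
    const rest = key.slice(prefix.length);
    if (!rest) continue;
    const slash = rest.indexOf("/");
    if (slash === -1) seen.set(rest, "file");
    else seen.set(rest.slice(0, slash), "directory"); // virtual directory
  }
  return [...seen].map(([name, type]) => ({ name, type }));
}

const entries = inferEntries(
  ["docs/a.txt", "docs/img/logo.png", "readme.md"],
  "/docs",
);
```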

Content Handling

  • JSON Objects: Automatically stringified on write and parsed on read
  • Binary Content: ArrayBuffer is encoded as base64 for storage
  • Text Content: Stored as-is in UTF-8 encoding
  • Large Files: Multipart upload for objects up to S3's 5 TB limit is a planned enhancement
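
The encoding rules above amount to a small dispatch on the content's type. A minimal illustrative sketch (the helper name and return shape are assumptions, not the package's API):

```typescript
// Encode content for storage following the rules above:
// ArrayBuffer -> base64, plain objects -> JSON, strings -> as-is (UTF-8).
function encodeContent(
  content: string | object | ArrayBuffer,
): { body: string; encoding: string } {
  // Check ArrayBuffer first: typeof ArrayBuffer is also "object".
  if (content instanceof ArrayBuffer) {
    return { body: Buffer.from(content).toString("base64"), encoding: "base64" };
  }
  if (typeof content === "object") {
    return { body: JSON.stringify(content), encoding: "json" };
  }
  return { body: content, encoding: "utf-8" };
}
```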

Metadata Storage

Firesystem metadata is stored as S3 object metadata:

  • x-amz-meta-type: "file" or "directory"
  • x-amz-meta-created: ISO date string
  • x-amz-meta-modified: ISO date string
  • x-amz-meta-custom: JSON stringified custom metadata
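
As a sketch, that mapping could look like the helper below (hypothetical, not the package's internals; note that with the AWS SDK the x-amz-meta- prefix is added by S3 itself, so the SDK's Metadata map uses the bare key names):

```typescript
// Flatten Firesystem entry metadata into an S3 user-metadata map.
// S3 prepends "x-amz-meta-" to each key on the wire.
function toS3Metadata(entry: {
  type: "file" | "directory";
  created: Date;
  modified: Date;
  custom?: Record<string, unknown>;
}): Record<string, string> {
  const meta: Record<string, string> = {
    type: entry.type,
    created: entry.created.toISOString(),
    modified: entry.modified.toISOString(),
  };
  // Custom metadata is JSON-stringified into a single key.
  if (entry.custom) meta.custom = JSON.stringify(entry.custom);
  return meta;
}
```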

Event System

Full reactive event support via TypedEventEmitter:

  • Operation lifecycle events (start, end, error)
  • File operation events (read, written, deleted)
  • Directory operation events (created, deleted)
  • Storage events (cleared, size calculated)
  • Initialization events (initializing, initialized)

Performance Tips

  1. Use prefixes to limit the scope of list operations
  2. Enable lenient mode for existing buckets to reduce requests
  3. Batch operations when possible to minimize API calls
  4. Cache frequently accessed files locally
  5. Use glob patterns carefully - they require listing many objects

Testing

The package includes comprehensive test coverage:

  • Core functionality: 100% tested
  • S3-specific features: Fully tested
  • Cross-provider compatibility: 87% of shared tests passing

Limitations

  1. Large Files: Currently loads entire file content into memory
  2. List Performance: S3 LIST operations can be slow with many objects
  3. Atomic Operations: S3 doesn't support true atomic operations
  4. Permissions: S3 permissions are not mapped to file system permissions
  5. Watch Events: File watching is client-side only (no server push from S3)
  6. Case Sensitivity: S3 keys are case-sensitive, unlike some file systems

Workspace Integration

S3FileSystem is a first-class citizen in the Firesystem workspace ecosystem. This enables powerful multi-project workflows with S3 storage.

Using S3 Provider

import { WorkspaceFileSystem } from "@workspace-fs/core";
import { s3Provider } from "@firesystem/s3/provider";

// Setup workspace
const workspace = new WorkspaceFileSystem();
workspace.registerProvider(s3Provider);
await workspace.initialize();

// Load multiple S3 projects
const production = await workspace.loadProject({
  id: "prod-data",
  name: "Production Data",
  source: {
    type: "s3",
    config: {
      bucket: "prod-bucket",
      region: "us-east-1",
      mode: "lenient", // Works with existing S3 data
    },
  },
});

const backup = await workspace.loadProject({
  id: "backup-data",
  name: "Backup Storage",
  source: {
    type: "s3",
    config: {
      bucket: "backup-bucket",
      region: "us-west-2",
      prefix: "/daily-backups/",
    },
  },
});

Cross-Project Operations

// Copy between S3 buckets
const data = await production.fs.readFile("/current/data.json");
await backup.fs.writeFile(`/backup-${Date.now()}.json`, data.content);

// Sync from production to backup
await workspace.copyFiles(
  "prod-data",
  "/reports/*.pdf",
  "backup-data",
  "/reports/",
);

// Mix S3 with other storage types
const local = await workspace.loadProject({
  id: "local-cache",
  source: { type: "indexeddb", config: { dbName: "cache" } },
});

// Download from S3 to local browser storage
const s3File = await production.fs.readFile("/large-dataset.json");
await local.fs.writeFile("/cached-dataset.json", s3File.content);

Environment Variables

The S3 provider supports credential resolution from environment:

# AWS credentials
export AWS_ACCESS_KEY_ID=your_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_REGION=us-east-1

# Or Firesystem-specific (takes precedence)
export FIRESYSTEM_S3_ACCESS_KEY_ID=your_key_id
export FIRESYSTEM_S3_SECRET_ACCESS_KEY=your_secret_key
export FIRESYSTEM_S3_REGION=us-east-1
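
The precedence described above can be expressed as a small resolver (illustrative sketch; the provider's actual resolution logic may differ):

```typescript
// Resolve credentials from the environment, preferring FIRESYSTEM_S3_*
// over the standard AWS_* variables.
function resolveS3Env(env: Record<string, string | undefined>) {
  const pick = (name: string) =>
    env[`FIRESYSTEM_S3_${name}`] ?? env[`AWS_${name}`];
  return {
    accessKeyId: pick("ACCESS_KEY_ID"),
    secretAccessKey: pick("SECRET_ACCESS_KEY"),
    region: pick("REGION"),
  };
}
```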

Provider Capabilities

const provider = workspace.getProvider("s3");
console.log(provider.getCapabilities());
// {
//   readonly: false,
//   caseSensitive: true,
//   atomicRename: false,
//   supportsWatch: false,
//   supportsMetadata: true,
//   supportsGlob: false,
//   maxFileSize: 5497558138880, // 5TB
//   maxPathLength: 1024,
//   description: "AWS S3 cloud storage with eventual consistency..."
// }

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT © Anderson D. Rosa

See Also