@firesystem/s3
AWS S3 implementation of the Firesystem Virtual File System. Store and manage files in Amazon S3 buckets with a familiar file system API.
Features
- 🌐 Full S3 Integration - Seamless read/write operations with S3 buckets
- 🔄 Dual Mode Operation
- Strict Mode: Full filesystem compatibility with directory markers
- Lenient Mode: Works with existing S3 buckets without modifications
- 📁 Virtual Directories - Full directory support using S3 prefixes
- 🏷️ Rich Metadata - Store custom metadata with S3 object tags
- 🔍 Prefix Isolation - Scope operations to specific bucket prefixes
- 📡 Reactive Events - Real-time notifications for all operations
- 🔐 Full TypeScript - Complete type safety and IntelliSense
- 🚀 Production Ready - Battle-tested with comprehensive test coverage
Installation
npm install @firesystem/s3
# or
yarn add @firesystem/s3
# or
pnpm add @firesystem/s3
Quick Start
import { S3FileSystem } from "@firesystem/s3";
// Create filesystem instance
const fs = new S3FileSystem({
bucket: "my-bucket",
region: "us-east-1",
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
}
});
// Initialize (required for strict mode)
await fs.initialize();
// Use like any filesystem
await fs.writeFile("/hello.txt", "Hello, S3!");
await fs.mkdir("/documents");
await fs.writeFile("/documents/report.pdf", binaryData);
const file = await fs.readFile("/hello.txt");
console.log(file.content); // "Hello, S3!"
const files = await fs.readDir("/documents");
console.log(files); // [{ name: "report.pdf", ... }]
Configuration
Basic Configuration
const fs = new S3FileSystem({
bucket: "my-bucket", // Required: S3 bucket name
region: "us-east-1", // Required: AWS region
credentials: { // Required: AWS credentials
accessKeyId: "...",
secretAccessKey: "..."
},
prefix: "/app/data/", // Optional: Scope to bucket prefix
mode: "strict" // Optional: "strict" or "lenient"
});
Working with Existing S3 Buckets
Use lenient mode to work seamlessly with existing S3 buckets:
const fs = new S3FileSystem({
bucket: "existing-bucket",
region: "us-west-2",
mode: "lenient", // No directory markers needed
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
}
});
// Works with existing S3 structure
const files = await fs.readDir("/");
// Returns virtual directories inferred from object keys
Prefix Isolation
Isolate your filesystem to a specific bucket prefix:
const fs = new S3FileSystem({
bucket: "shared-bucket",
region: "eu-west-1",
prefix: "/tenants/customer-123/",
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
}
});
// All operations are scoped to the prefix
await fs.writeFile("/config.json", { version: "1.0" });
// Actually writes to: s3://shared-bucket/tenants/customer-123/config.json
S3-Compatible Services
Works with S3-compatible services like MinIO, Wasabi, or DigitalOcean Spaces:
const fs = new S3FileSystem({
bucket: "my-bucket",
region: "us-east-1",
credentials: {
accessKeyId: "minioadmin",
secretAccessKey: "minioadmin"
},
clientOptions: {
endpoint: "http://localhost:9000",
forcePathStyle: true // Required for MinIO
}
});
API Examples
File Operations
// Write text file
await fs.writeFile("/notes.txt", "My notes");
// Write JSON
await fs.writeFile("/config.json", {
name: "myapp",
version: "1.0.0"
});
// Write binary data
const buffer = new ArrayBuffer(1024);
await fs.writeFile("/data.bin", buffer);
// Read file
const file = await fs.readFile("/notes.txt");
console.log(file.content); // "My notes"
console.log(file.size); // 8
console.log(file.created); // Date object
// Delete file
await fs.deleteFile("/notes.txt");
// Check existence
const exists = await fs.exists("/notes.txt"); // false
Directory Operations
// Create directory
await fs.mkdir("/projects");
// Create nested directories
await fs.mkdir("/projects/2024/january", true);
// List directory contents
const entries = await fs.readDir("/projects");
// [
// { name: "2024", type: "directory", ... }
// ]
// Remove empty directory
await fs.rmdir("/projects/temp");
// Remove directory recursively
await fs.rmdir("/projects/old", true);
Advanced Operations
// Copy files
await fs.copy("/template.docx", "/documents/new.docx");
// Move/rename files
await fs.rename("/old-name.txt", "/new-name.txt");
// Move multiple files
await fs.move(["/file1.txt", "/file2.txt"], "/archive/");
// Get file stats
const stats = await fs.stat("/large-file.zip");
console.log(stats.size); // File size in bytes
console.log(stats.modified); // Last modified date
// Search with glob patterns
const jsFiles = await fs.glob("**/*.js");
const testFiles = await fs.glob("**/test-*.js");
const rootFiles = await fs.glob("*"); // Root level only
Reactive Events
// Listen to filesystem events
fs.events.on("file:written", ({ path, size }) => {
console.log(`File ${path} uploaded to S3 (${size} bytes)`);
});
fs.events.on("file:deleted", ({ path }) => {
console.log(`File ${path} removed from S3`);
});
fs.events.on("operation:error", ({ operation, error }) => {
console.error(`Operation ${operation} failed:`, error);
});
// Watch for changes (client-side simulation)
const watcher = fs.watch("**/*.json", (event) => {
console.log(`File ${event.path} was ${event.type}`);
});
// Stop watching
watcher.dispose();
Custom Metadata
// Write file with metadata
await fs.writeFile("/document.pdf", pdfBuffer, {
tags: ["important", "contract"],
author: "John Doe",
department: "Legal"
});
// Read file with metadata
const file = await fs.readFile("/document.pdf");
console.log(file.metadata);
// { tags: ["important", "contract"], author: "John Doe", ... }
Mode Comparison
| Feature | Strict Mode | Lenient Mode |
|---|---|---|
| Directory markers | Creates .../ objects | Virtual only |
| Parent directory check | Required | Not enforced |
| Existing S3 compatibility | Requires markers | Works with any bucket |
| Performance | More S3 requests | Fewer requests |
| Best for | New applications | Existing buckets |
Implementation Details
Directory Handling
- Strict Mode: Creates empty objects with "/" suffix as directory markers
- Lenient Mode: Directories are virtual and inferred from object prefixes
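To make lenient-mode inference concrete, here is a minimal standalone sketch (not part of the package API; inferDirectories is a hypothetical helper) of how virtual directory prefixes can be derived from plain object keys:

```ts
// Hypothetical helper, for illustration only: derive virtual directory
// prefixes from a flat list of S3 object keys, the way lenient mode
// treats directories.
function inferDirectories(keys: string[]): Set<string> {
  const dirs = new Set<string>();
  for (const key of keys) {
    const segments = key.split("/").slice(0, -1); // drop the object name
    let prefix = "";
    for (const segment of segments) {
      prefix += segment + "/";
      dirs.add(prefix); // every intermediate prefix is a virtual directory
    }
  }
  return dirs;
}

// ["docs/a.txt", "docs/2024/b.txt"] -> Set { "docs/", "docs/2024/" }
console.log(inferDirectories(["docs/a.txt", "docs/2024/b.txt"]));
```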
Binary Content
Binary content (ArrayBuffer) is automatically encoded as base64 for storage and decoded when reading.
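The round trip looks roughly like the following sketch (shown with Node's Buffer; the package's internal encoding details may differ):

```ts
// Illustrative base64 round trip for binary content (sketch only).
const original = new Uint8Array([1, 2, 3, 4]).buffer;

// Encode: roughly what ends up stored in the S3 object body.
const stored = Buffer.from(original).toString("base64");

// Decode: roughly what readFile hands back for binary files.
const bytes = Buffer.from(stored, "base64");
const restored = bytes.buffer.slice(bytes.byteOffset, bytes.byteOffset + bytes.byteLength);

console.log(new Uint8Array(restored)); // Uint8Array [ 1, 2, 3, 4 ]
```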
Metadata Storage
Firesystem metadata is stored as S3 object metadata:
- x-amz-meta-type: "file" or "directory"
- x-amz-meta-created: ISO date string
- x-amz-meta-modified: ISO date string
- x-amz-meta-custom: JSON stringified custom metadata
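For illustration, a writeFile call with custom metadata could translate into an S3 PutObject along these lines (a sketch using the AWS SDK v3, which sends each Metadata entry with an x-amz-meta- prefix; the bucket, key, and exact request shape here are assumptions, not the package's verbatim internals):

```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Sketch of the metadata mapping listed above.
await client.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "documents/report.pdf",
  Body: Buffer.from("...file content..."),
  Metadata: {
    type: "file",                                  // x-amz-meta-type
    created: new Date().toISOString(),             // x-amz-meta-created
    modified: new Date().toISOString(),            // x-amz-meta-modified
    custom: JSON.stringify({ author: "John Doe" }) // x-amz-meta-custom
  }
}));
```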
Performance Tips
- Use prefixes to limit the scope of list operations
- Enable lenient mode for existing buckets to reduce requests
- Batch operations when possible to minimize API calls
- Cache frequently accessed files locally
- Use glob patterns carefully - they require listing many objects
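For the caching tip above, a minimal read-through cache might look like this (a sketch with no invalidation or size limit; readFileCached is not part of the package API):

```ts
import { S3FileSystem } from "@firesystem/s3";

// Hypothetical read-through cache: serve repeated reads from memory
// instead of issuing a new S3 GET for every readFile call.
const fileCache = new Map<string, unknown>();

async function readFileCached(fs: S3FileSystem, path: string) {
  if (fileCache.has(path)) {
    return fileCache.get(path); // cache hit: no S3 request
  }
  const file = await fs.readFile(path);
  fileCache.set(path, file); // cache miss: fetch once, reuse afterwards
  return file;
}
```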
Testing
The package includes comprehensive test coverage:
- ✅ Core functionality: 100% tested
- ✅ S3-specific features: Fully tested
- ✅ Cross-provider compatibility: 87% of shared tests passing
Limitations
- Large Files: Currently loads entire file content into memory
- List Performance: S3 LIST operations can be slow with many objects
- Atomic Operations: S3 doesn't support true atomic operations
- Permissions: S3 permissions are not mapped to file system permissions
- Watch Events: File watching is client-side only (no server push from S3)
- Case Sensitivity: S3 keys are case-sensitive, unlike some file systems
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT © Anderson D. Rosa
See Also
- @firesystem/core - Core interfaces
- @firesystem/memory - In-memory implementation
- @firesystem/indexeddb - Browser storage
- @firesystem/workspace - Multi-project support