Application Architecture
Bunty applications all start from a single foundation: the App Container, created with createApp(). Every other runtime (HTTP servers, workers, CLI tools, headless daemons) attaches to this same container as a subsystem.
The Core Concept
createApp() builds a dependency-injected, configuration-driven core that can operate in any mode:
- Headless mode: run logic directly, with no HTTP server or timers
- HTTP mode: mount APIs using createHttpServer()
- Worker mode: schedule jobs using createWorker()
All share the same DI container, config, and lifecycle hooks.
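To make the shared-container idea concrete, here is a minimal, framework-agnostic sketch (the Container class and service here are illustrative, not Bunty's actual implementation): two "subsystems" resolve the same service from one container and therefore share state.

```typescript
// Minimal DI container sketch: one registry of lazily created singletons.
type Provider<T> = new () => T;

class Container {
  private instances = new Map<Provider<unknown>, unknown>();

  get<T>(provider: Provider<T>): T {
    // Instantiate once and cache, so every subsystem sees the same object.
    if (!this.instances.has(provider)) {
      this.instances.set(provider, new provider());
    }
    return this.instances.get(provider) as T;
  }
}

class UserService {
  users: string[] = [];
}

// Two "subsystems" attached to the same container.
const container = new Container();
const fromHttp = container.get(UserService);   // resolved by the HTTP layer
const fromWorker = container.get(UserService); // resolved by the worker layer

fromHttp.users.push('alice');
console.log(fromWorker.users); // same instance, so both see ['alice']
```

This is the property the rest of this page relies on: whichever layer asks for a service, it gets the one instance the container owns.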
Three Layers
Bunty applications are built in three distinct layers:
1. Application Layer (Core)
The foundation created with createApp():
import { createApp } from '@bunty/common';
const app = createApp({
name: 'my-app',
providers: [
// Register all services
UserService,
OrderService,
DatabaseService,
],
});
await app.start();
Responsibilities:
- Initialize DI container
- Load configuration
- Manage service lifecycle
- Provide shared context
2. HTTP Layer (Optional)
Web APIs built with createHttpServer():
import { createHttpServer } from '@bunty/http';
const http = createHttpServer(app, {
port: 3000,
routes: './routes/**/*.ts',
});
await http.start();
Responsibilities:
- Handle HTTP requests/responses
- Route management
- Middleware execution
- REST/GraphQL APIs
3. Worker Layer (Optional)
Background tasks built with createWorker():
import { createWorker } from '@bunty/worker';
const worker = createWorker({
app,
interval: '10m',
tasks: [DataSyncWorker],
});
await worker.start();
Responsibilities:
- Scheduled jobs
- Background processing
- Queue consumers
- Data synchronization
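Worker examples throughout this page use interval shorthand such as '5m' and '15m'. One plausible way such strings could be interpreted is shown below; this is a sketch, and Bunty's actual interval grammar may accept more units or formats.

```typescript
// Sketch: parse interval shorthand such as '30s', '10m', '1h' into milliseconds.
function parseInterval(interval: string): number {
  const match = /^(\d+)(s|m|h|d)$/.exec(interval);
  if (!match) throw new Error(`Unsupported interval: ${interval}`);
  const value = Number(match[1]);
  const unitMs = { s: 1_000, m: 60_000, h: 3_600_000, d: 86_400_000 }[
    match[2] as 's' | 'm' | 'h' | 'd'
  ];
  return value * unitMs;
}

console.log(parseInterval('10m')); // 600000
console.log(parseInterval('30s')); // 30000
```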
Unified Pattern
All three layers work together seamlessly:
import { createApp } from '@bunty/common';
import { createHttpServer } from '@bunty/http';
import { createWorker } from '@bunty/worker';
// 1. Create core application
const app = createApp({
name: 'bunty-app',
providers: [
UserService,
OrderService,
PaymentService,
DatabaseService,
CacheService,
],
});
// 2. Attach HTTP server
const http = createHttpServer(app, {
port: 3000,
routes: './routes',
});
// 3. Attach background worker
const worker = createWorker({
app,
interval: '5m',
tasks: [OrderProcessorWorker],
});
// 4. Start everything
await app.start();
await http.start();
await worker.start();
// All share:
// All share:
// - One DI container
// - One configuration source
// - Unified lifecycle and logging
Example: E-Commerce System
Here's a complete e-commerce application using all three layers:
import { createApp, Injectable, inject } from '@bunty/common';
import { createHttpServer } from '@bunty/http';
import { createWorker } from '@bunty/worker';
// Application Layer - Business Logic
@Injectable()
class OrderService {
constructor(
private db: DatabaseService,
private payment: PaymentService,
private email: EmailService
) {}
async createOrder(data: CreateOrderDto) {
const order = await this.db.insert(ordersTable, data);
await this.payment.charge(order);
await this.email.sendConfirmation(order);
return order;
}
}
// HTTP Layer - API Endpoints
const app = createApp({
name: 'ecommerce',
providers: [OrderService, PaymentService, EmailService],
});
const http = createHttpServer(app, { port: 3000 });
http.post('/api/orders', async (req, res) => {
const orderService = inject(OrderService);
const order = await orderService.createOrder(req.body);
return res.status(201).json(order);
});
// Worker Layer - Background Jobs
@Injectable()
class OrderProcessorWorker {
constructor(private orderService: OrderService) {}
async run() {
// Process pending orders every 5 minutes
const pending = await this.getPendingOrders();
for (const order of pending) {
await this.orderService.processOrder(order.id);
}
}
}
const worker = createWorker({
app,
interval: '5m',
tasks: [OrderProcessorWorker],
});
// Start all layers
await app.start();
await http.start();
await worker.start();
Headless Mode
For applications that don't need HTTP or scheduling:
import { createApp } from '@bunty/common';
const app = createApp({
name: 'migration-tool',
providers: [DatabaseService, MigrationService],
});
await app.start(async (container) => {
const migrator = container.get(MigrationService);
await migrator.executeAll();
await app.shutdown();
});
Perfect for:
- Data migrations
- ETL or import/export utilities
- CLI tools
- Event stream consumers
- One-off scripts
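The headless "start, run, shut down" flow above can be sketched generically. The runner below is an illustration of the control flow, not Bunty's real API; the container stand-in and event names are hypothetical.

```typescript
// Sketch of the headless lifecycle: initialize, run a one-off task, shut down.
type Task = (container: Map<string, unknown>) => Promise<void> | void;

const events: string[] = [];

async function runHeadless(task: Task): Promise<string[]> {
  const container = new Map<string, unknown>(); // stand-in for the DI container
  events.push('init');        // providers would be instantiated here
  await task(container);      // run the one-off job
  events.push('shutdown');    // connections closed, resources released
  return events;
}

const log = await runHeadless(() => {
  events.push('migrate');
});
console.log(log); // ['init', 'migrate', 'shutdown']
```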
Data Sync Example
Complete headless data synchronization application:
import { createApp, Injectable } from '@bunty/common';
import { createWorker } from '@bunty/worker';
@Injectable()
class DataSourceService {
async fetchRecords() {
// Fetch from external API
const response = await fetch('https://api.external.com/records');
return response.json();
}
}
@Injectable()
class StorageService {
constructor(private db: DatabaseService) {}
async saveBatch(records: any[]) {
return await this.db.insertMany(recordsTable, records);
}
}
@Injectable()
class DataSyncService {
constructor(
private source: DataSourceService,
private storage: StorageService,
private logger: Logger
) {
this.logger.setName('DataSync');
}
async run() {
this.logger.info('Starting data sync...');
const records = await this.source.fetchRecords();
this.logger.debug('Fetched records:', records.length);
const saved = await this.storage.saveBatch(records);
this.logger.success(`Synced ${saved.length} records`);
}
}
// Create base application
const app = createApp({
name: 'data-sync',
providers: [
DataSourceService,
StorageService,
DataSyncService,
DatabaseService,
],
});
// Mount worker (runs every 15 minutes)
const worker = createWorker({
app,
interval: '15m',
run: async (container) => {
const sync = container.get(DataSyncService);
await sync.run();
},
});
// Start application
await app.start();
await worker.start();
// Graceful shutdown on signal
process.on('SIGINT', async () => {
await worker.stop();
await app.shutdown();
process.exit(0);
});
Lifecycle Management
All layers share a unified lifecycle:
const app = createApp({
name: 'my-app',
providers: [...],
// Application lifecycle hooks
async onInit() {
console.log('Application initializing...');
},
async onReady() {
console.log('Application ready!');
},
async onShutdown() {
console.log('Application shutting down...');
await this.cleanup();
},
});
// HTTP lifecycle
const http = createHttpServer(app, {
async onStart() {
console.log('HTTP server starting...');
},
async onReady() {
console.log('HTTP server ready!');
},
});
// Worker lifecycle
const worker = createWorker({
app,
async onInit() {
console.log('Worker initializing...');
},
async onShutdown() {
console.log('Worker shutting down...');
},
});
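The ordering guarantee behind these hooks can be sketched with a tiny runner. The hook names match the ones above, but the runner itself is a deliberate simplification, not Bunty's implementation.

```typescript
// Illustrative sketch of hook ordering: start() fires onInit then onReady;
// shutdown() fires onShutdown last.
interface LifecycleHooks {
  onInit?: () => void;
  onReady?: () => void;
  onShutdown?: () => void;
}

function makeLifecycle(hooks: LifecycleHooks) {
  const order: string[] = [];
  return {
    order,
    start() {
      hooks.onInit?.();
      order.push('init');
      hooks.onReady?.();
      order.push('ready');
    },
    shutdown() {
      hooks.onShutdown?.();
      order.push('shutdown');
    },
  };
}

const lc = makeLifecycle({
  onInit: () => console.log('initializing...'),
  onReady: () => console.log('ready!'),
  onShutdown: () => console.log('shutting down...'),
});
lc.start();
lc.shutdown();
console.log(lc.order); // ['init', 'ready', 'shutdown']
```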
Detailed Lifecycle Flow
1. Application Initialization (app.start())
What happens:
- Dependency injection container is created and configured
- All registered providers are instantiated in dependency order
- Configuration is loaded from environment variables and config files
- Database connections are established and validated
- Core services are initialized but not yet active
async onInit() {
// Example: Initialize database connections
await this.databaseService.connect();
// Example: Load external configuration
await this.configService.loadRemoteConfig();
// Example: Initialize shared resources
await this.cacheService.initialize();
}
2. Subsystem Initialization (http.onStart() / worker.onInit())
What happens:
- HTTP server binds to ports and prepares routing tables
- Worker tasks are registered and scheduling is configured
- Middleware chains are built and validated
- Health check endpoints are activated
- External service connections are established
// HTTP subsystem initialization
async onStart() {
// Compile and validate all routes
await this.routeManager.compileRoutes();
// Initialize middleware chain
await this.middlewareManager.setup();
// Start health check endpoints
await this.healthService.start();
}
// Worker subsystem initialization
async onInit() {
// Register scheduled tasks
await this.scheduler.registerTasks();
// Initialize job queues
await this.queueManager.setup();
// Prepare worker pools
await this.workerPool.initialize();
}
3. Application Ready (app.onReady())
What happens:
- All core services are fully initialized and ready
- Dependencies are resolved and validated
- System health checks pass
- Application state is marked as "ready"
- Ready signal is sent to container orchestrator
async onReady() {
// Verify all critical services are healthy
await this.healthChecker.verifyServices();
// Mark application as ready for traffic
this.applicationState.setReady(true);
// Signal container orchestrator (K8s readiness probe)
await this.signalReady();
console.log('Application fully ready for requests');
}
4. HTTP Server Ready (http.onReady())
What happens:
- HTTP server is listening on configured ports
- All routes are active and accepting requests
- Load balancer health checks will pass
- Prometheus metrics endpoints are active
async onReady() {
console.log(`HTTP server listening on port ${this.config.port}`);
// Register with service discovery
await this.serviceRegistry.register();
// Start accepting requests
this.server.ready = true;
}
5. Runtime Operation
What happens:
- HTTP requests are processed through middleware chains
- Worker tasks execute on their scheduled intervals
- Database queries and external API calls are made
- Metrics and logs are collected
- Health checks continue monitoring system state
6. Shutdown Signal Received (SIGTERM, SIGINT)
What happens:
- Graceful shutdown process begins
- No new requests are accepted (HTTP server stops listening)
- Existing connections are allowed to complete
- Worker tasks are allowed to finish current execution
- Kubernetes Integration: Respects termination grace period
// Graceful shutdown handler
process.on('SIGTERM', async () => {
console.log('Received SIGTERM, starting graceful shutdown...');
// Stop accepting new requests immediately
await http.stopAcceptingConnections();
// Wait for pending operations with timeout
await app.shutdown({
timeout: 30000, // 30 second timeout
waitForPending: true // Wait for active requests/tasks
});
});
7. Subsystem Shutdown (http.onShutdown() / worker.onShutdown())
What happens:
- HTTP server stops accepting new connections
- Active HTTP requests are given time to complete (default: 30 seconds)
- Worker tasks finish their current execution cycles
- Job queues are drained or paused
- External connections are gracefully closed
// HTTP subsystem shutdown
async onShutdown() {
console.log('HTTP server shutting down...');
// Stop accepting new connections
this.server.close();
// Wait for active requests to complete
await this.waitForActiveRequests(30000);
// Close persistent connections
await this.connectionManager.closeAll();
console.log('HTTP server shutdown complete');
}
// Worker subsystem shutdown
async onShutdown() {
console.log('Worker shutting down...');
// Stop scheduling new tasks
this.scheduler.stop();
// Wait for running tasks to complete
await this.waitForActiveTasks(30000);
// Cleanup worker resources
await this.workerPool.destroy();
console.log('Worker shutdown complete');
}
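The waitForActiveRequests and waitForActiveTasks helpers referenced above are not defined in the snippets; a drain helper in that spirit could look like the following. This is a hypothetical sketch: an in-flight counter that resolves once outstanding work reaches zero, or rejects after a timeout.

```typescript
// Hypothetical sketch of a "wait for in-flight work" drain helper.
class InFlightTracker {
  private active = 0;

  enter() { this.active += 1; } // call when a request/task starts
  exit()  { this.active -= 1; } // call when it finishes

  // Resolve once active work hits zero, or reject after timeoutMs.
  waitForDrain(timeoutMs: number, pollMs = 10): Promise<void> {
    const deadline = Date.now() + timeoutMs;
    return new Promise((resolve, reject) => {
      const poll = () => {
        if (this.active === 0) return resolve();
        if (Date.now() > deadline) return reject(new Error('Drain timed out'));
        setTimeout(poll, pollMs);
      };
      poll();
    });
  }
}

const tracker = new InFlightTracker();
tracker.enter();
setTimeout(() => tracker.exit(), 20); // simulate one request finishing
const drained = await tracker.waitForDrain(1000).then(() => true);
console.log(drained); // true
```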
8. Application Shutdown (app.onShutdown())
What happens:
- All subsystems have been stopped
- Database connections are closed
- Cache connections are terminated
- Temporary files and resources are cleaned up
- Final logging and metrics are sent
async onShutdown() {
console.log('Application final cleanup...');
// Close database connections
await this.databaseService.disconnect();
// Close cache connections
await this.cacheService.disconnect();
// Flush remaining logs
await this.logger.flush();
// Send final metrics
await this.metrics.send();
console.log('Application shutdown complete');
}
9. Process Exit
What happens:
- All cleanup is complete
- Process exits with appropriate exit code
- Container orchestrator receives exit signal
- New instances can be started if needed
Container-Aware Graceful Shutdown
Bunty applications are designed for production container environments with proper shutdown handling:
import { createApp } from '@bunty/common';
import { createHttpServer } from '@bunty/http';
import { createWorker } from '@bunty/worker';
const app = createApp({
name: 'production-app',
providers: [...],
// Configure graceful shutdown for containers
shutdown: {
timeout: 30000, // Maximum shutdown time (K8s terminationGracePeriodSeconds)
waitForPending: true, // Wait for active requests/tasks to complete
forceExitAfter: 35000, // Force exit after this time (5s buffer)
signals: ['SIGTERM', 'SIGINT'], // Handle container signals
}
});
const http = createHttpServer(app, {
port: 3000,
shutdown: {
keepAliveTimeout: 5000, // Keep connections alive during shutdown
requestTimeout: 25000, // Maximum time for requests to complete
closeConnections: true, // Close idle connections immediately
}
});
const worker = createWorker({
app,
interval: '5m',
shutdown: {
taskTimeout: 25000, // Maximum time for tasks to complete
drainQueue: true, // Process remaining queued items
saveState: true, // Persist worker state before exit
}
});
await app.start();
await http.start();
await worker.start();
// Kubernetes-ready shutdown handling
process.on('SIGTERM', async () => {
console.log('SIGTERM received from Kubernetes');
try {
// Graceful shutdown with proper timeout handling
await Promise.race([
app.shutdown(),
new Promise((_, reject) =>
setTimeout(() => reject(new Error('Shutdown timeout')), 30000)
)
]);
console.log('Graceful shutdown completed');
process.exit(0);
} catch (error) {
console.error('Shutdown error:', error);
process.exit(1);
}
});
// Handle immediate shutdown requests
process.on('SIGINT', async () => {
console.log('SIGINT received, forcing shutdown');
process.exit(0);
});
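The Promise.race pattern in the SIGTERM handler above can be factored into a reusable helper. This is plain TypeScript, not a Bunty API; the helper name and error message are illustrative.

```typescript
// Sketch: generic timeout wrapper matching the Promise.race shutdown pattern.
function withTimeout<T>(promise: Promise<T>, ms: number, label = 'operation'): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  // Clear the timer either way so it cannot keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// A fast operation wins the race and resolves normally.
const fast = await withTimeout(Promise.resolve('done'), 1000, 'shutdown');
console.log(fast); // 'done'

// A slow operation loses the race and surfaces a timeout error instead.
const slow = await withTimeout(
  new Promise((resolve) => setTimeout(resolve, 200)),
  50,
  'shutdown',
).catch((err: Error) => err.message);
console.log(slow); // 'shutdown timed out after 50ms'
```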
Production Benefits
Kubernetes Integration:
- Respects terminationGracePeriodSeconds in pod specs
- Handles SIGTERM signals from kubelet properly
- Ensures zero-downtime deployments with rolling updates
- Prevents connection drops during pod replacement
Docker Integration:
- Responds to docker stop commands gracefully
- Handles container lifecycle events properly
- Ensures data consistency during container restart
Load Balancer Integration:
- Removes instances from load balancer rotation before shutdown
- Completes in-flight requests before terminating
- Prevents 502/503 errors during deployments
Example Kubernetes Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: bunty-app
spec:
template:
spec:
terminationGracePeriodSeconds: 30 # Matches app shutdown timeout
containers:
- name: app
image: bunty-app:latest
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 5"] # Allow load balancer to update
Deployment Strategies
Monolith (All-in-One)
// src/main.ts
const app = createApp({ name: 'monolith', providers: [...] });
const http = createHttpServer(app, { port: 3000 });
const worker = createWorker({ app, interval: '5m', tasks: [...] });
await app.start();
await http.start();
await worker.start();
Pros:
- Simple deployment
- Low infrastructure cost
- Easy local development
Separated Services
HTTP Service:
// apps/api/main.ts
const app = createApp({ name: 'api', providers: [...] });
const http = createHttpServer(app, { port: 3000 });
await app.start();
await http.start();
Worker Service:
// apps/worker/main.ts
const app = createApp({ name: 'worker', providers: [...] });
const worker = createWorker({ app, interval: '5m', tasks: [...] });
await app.start();
await worker.start();
Pros:
- Independent scaling
- Isolated failures
- Resource optimization
Hybrid (Mix & Match)
// Service 1: HTTP + Critical Workers
const app1 = createApp({ name: 'api', providers: [...] });
const http = createHttpServer(app1, { port: 3000 });
const criticalWorker = createWorker({ app: app1, interval: '1m', tasks: [...] });
// Service 2: Heavy Background Processing
const app2 = createApp({ name: 'batch', providers: [...] });
const batchWorker = createWorker({ app: app2, interval: '1h', tasks: [...] });
Pros:
- Best of both worlds
- Critical workers stay with API
- Heavy jobs run separately
Architecture Summary
Bunty uses a single, extensible application container pattern. createApp() initializes the core runtime, while createHttpServer() and createWorker() attach additional behaviors. Whether running a REST API, a background processor, or a headless tool, every component operates within the same dependency injection graph and configuration context. This design gives developers one consistent mental model for building everything, from modular monoliths to distributed systems, with no code duplication.
Next Steps
- Learn about the Application Layer in detail
- Explore the HTTP Layer for building APIs
- Understand the Worker Layer for background tasks
- See Dependency Injection for service management
- Check out Workers for scheduling patterns