HatMax is a collection of Go packages for web applications: lifecycle management, authentication, background jobs, pubsub, htmx components, image handling, email delivery, and supporting utilities. The packages compose independently and default to Postgres where storage is needed.
The goal is consistent wiring across projects: the same patterns applied the same way. Use the auth package without the scheduler, the scheduler without pubsub, or wire them together when you need both.
Most Go web projects share the same needs: lifecycle management, authentication, background jobs, events. The differences are in the edges: how sessions are stored, whether 2FA is required, which email provider you use. HatMax extracts the common structure while leaving those edges pluggable. Postgres handles persistence by default, because adding Redis or RabbitMQ for jobs and events is operational overhead most projects don't need.
Core
Every Go application needs to start things in order, stop them cleanly when interrupted, and wire routes to handlers. These three packages handle that foundation.
The app package manages application lifecycle through interface detection:
type Startable interface {
Start(context.Context) error
}
type Stoppable interface {
Stop(context.Context) error
}
type RouteRegistrar interface {
RegisterRoutes(chi.Router)
}
Components implement whichever interfaces they need. The setup function inspects dependencies and groups them:
starts, stops, registrars := app.Setup(ctx, router,
database,
eventBroker,
scheduler,
userHandler,
)
If component N fails to start, components 0 through N-1 stop in reverse order. Shutdown happens in LIFO order.
The config package handles environment-based configuration loading. It reads from YAML files, environment variables, and command-line arguments with a defined precedence. The log package provides structured logging built on slog, integrating with config for log level and format settings.
Web
HatMax assumes htmx as the frontend layer. The packages here provide type-safe attribute builders, response helpers, and UI components designed for partial page updates rather than full SPA patterns.
The htmx package provides type-safe htmx abstractions. Primitives for triggers, actions, targets, swaps, and response headers:
attrs := htmx.HX().
Post("/items").
TargetID("list").
SwapOuter().
Confirm("Are you sure?")
Triggers support delays and throttling:
htmx.OnKeyup().Throttle(300 * time.Millisecond)
htmx.Every(5 * time.Second)
Response helpers for server-side control:
if htmx.IsHTMXRequest(r) {
htmx.Retarget(w, "#notifications")
htmx.TriggerEvent(w, "itemAdded")
}
The ui package provides an htmx-first component kit. Includes Chip, Label, Button, Alert, Flash, Toast, Link, Form, Table, and Nav components:
btn := ui.NewButton("Save").Emoji(ui.EmojiSave).Primary()
btn := ui.NewButton("Load").HX().Get("/data").TargetID("result").Done()
Delete buttons render as forms rather than links, so crawlers and link prefetchers that follow hrefs can't trigger deletions:
deleteBtn := ui.NewDeleteButton("Delete", "/items/123").
CSRFToken(token).
Confirm("Are you sure?")
Auth and Security
Authentication starts simple (email/password, session tokens) but requirements grow. These packages provide primitives that scale from basic login to TOTP enforcement with grace periods, without requiring you to adopt the complex path upfront. The auth package covers email/password signin and session validation:
svc := auth.NewService(queries, cfg, log)
session, err := svc.Signin(ctx, email, password)
user, err := svc.ValidateSession(ctx, token)
Middleware for requiring or optionally checking authentication:
r.Use(auth.RequireAuth(svc))
r.Use(auth.OptionalAuth(svc))
TOTP enforcement with grace periods:
r.Use(auth.RequireTOTP(auth.TOTPEnforcement{
Enabled: func() bool { return settings.GetBool(ctx, "security.require_2fa") },
GraceDays: func() int { return settings.GetInt(ctx, "security.2fa_grace_period_days") },
SetupURL: "/settings/2fa",
}))
The crypto package provides PASETO v4 tokens for stateless auth scenarios, AES-256-GCM encryption for sensitive data, and Argon2id password hashing. TOTP support includes key generation, QR code rendering, and backup codes:
key, _ := crypto.GenerateTOTPKey("MyApp", "user@example.com")
png, _ := crypto.GenerateQRCodePNG(key, 200)
valid := crypto.ValidateTOTPCode(secret, code)
plain, hashed, _ := crypto.GenerateBackupCodes(8)
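What an API like GenerateBackupCodes does can be sketched with the standard library: produce n random codes for the user, plus digests for storage so the plaintext codes are never persisted. The code format and the use of SHA-256 here are illustrative assumptions; anything guessable should go through a slow hash like Argon2id in production:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// generateBackupCodes returns n random codes (shown to the user once)
// and their hashes (stored for later verification). Illustrative sketch;
// not the crypto package's actual implementation.
func generateBackupCodes(n int) (plain, hashed []string, err error) {
	for i := 0; i < n; i++ {
		buf := make([]byte, 5) // 40 bits -> 10 hex chars per code
		if _, err := rand.Read(buf); err != nil {
			return nil, nil, err
		}
		code := hex.EncodeToString(buf)
		sum := sha256.Sum256([]byte(code))
		plain = append(plain, code)
		hashed = append(hashed, hex.EncodeToString(sum[:]))
	}
	return plain, hashed, nil
}

func main() {
	plain, hashed, _ := generateBackupCodes(8)
	fmt.Println(len(plain), len(hashed)) // 8 8
	fmt.Println(len(plain[0]))           // 10
}
```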
Data
PostgreSQL as the primary datastore, with packages for connection management, migrations, validation, and seeding. No ORM; these are utilities that work with sqlc or hand-written queries.
The validation package provides field validation with a fluent API. Collects errors into a structure suitable for form rendering:
v := validation.New()
v.Check(user.Email).Required().Email()
v.Check(user.Age).Min(18)
if v.HasErrors() {
return v.Errors()
}
The seed package handles database seeding with tracking to avoid duplicates. Symbolic references (Ref/RefMap) allow seeds to reference each other by name rather than hardcoded IDs.
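The idea behind symbolic references can be shown with a toy map: the first seed records the ID its row received, and later seeds resolve the name instead of hardcoding a primary key. Names and methods below are illustrative, not the seed package's actual API:

```go
package main

import "fmt"

// RefMap records the ID each named seed row received, so later seeds
// can refer to "role:admin" rather than a hardcoded primary key.
// (Illustrative sketch of the Ref/RefMap idea.)
type RefMap map[string]int64

func (m RefMap) Set(name string, id int64) { m[name] = id }

func (m RefMap) Get(name string) int64 {
	id, ok := m[name]
	if !ok {
		panic("unresolved seed reference: " + name)
	}
	return id
}

func main() {
	refs := RefMap{}
	// First seed inserts a role and records the generated ID.
	refs.Set("role:admin", 42) // pretend 42 came back from the INSERT
	// A later seed resolves the name instead of hardcoding 42.
	fmt.Println(refs.Get("role:admin")) // 42
}
```

Failing loudly on an unresolved reference is the point: a seed that silently inserted a zero foreign key would be much harder to debug.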
Media
Image handling with variant generation and pluggable storage. The processing layer is separate from storage, so you can resize locally and store in S3, or use different processors for different environments.
The image package supports original, large, medium, and thumbnail sizes with pluggable storage and processing backends. Local filesystem and AWS S3 storage implementations are included. The standard library processor handles basic resize operations without external dependencies.
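The arithmetic behind variant generation is a fit-within calculation that preserves aspect ratio and never upscales. A sketch (the image package's exact sizes and rounding may differ):

```go
package main

import "fmt"

// fitWithin computes dimensions that fit inside maxW x maxH while
// preserving aspect ratio, scaling by whichever side binds first.
func fitWithin(w, h, maxW, maxH int) (int, int) {
	if w <= maxW && h <= maxH {
		return w, h // already fits; never upscale
	}
	if w*maxH > h*maxW { // w/h > maxW/maxH: width is the tighter constraint
		return maxW, h * maxW / w
	}
	return w * maxH / h, maxH
}

func main() {
	fmt.Println(fitWithin(4000, 3000, 1600, 1600)) // 1600 1200
	fmt.Println(fitWithin(3000, 4000, 1600, 1600)) // 1200 1600
	fmt.Println(fitWithin(800, 600, 1600, 1600))   // 800 600
}
```

Comparing cross-products (`w*maxH > h*maxW`) avoids floating-point division entirely, which keeps the result deterministic across platforms.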
Infrastructure
Email, events, background jobs, telemetry. The services most applications eventually need, with pluggable backends and test doubles.
The mailer package provides email delivery with pluggable providers. Supports SMTP, SendGrid, AWS SES, and Mailgun. Noop implementation for testing:
mail := mailer.New(cfg, logger)
mail.Send(ctx, mailer.Message{
To: "user@example.com",
Subject: "Welcome",
Body: body,
})
The pubsub package provides publish/subscribe messaging with fan-out semantics. Named subscribers resume from last offset. At-least-once delivery:
broker := postgres.NewBroker(db, cfg, log)
broker.Publish(ctx, "user.created", envelope)
broker.Subscribe(ctx, "user.created", handler, pubsub.SubscribeOptions{
SubscriberID: "email-sender",
})
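The offset-tracking semantics can be modeled in memory: each named subscriber keeps its own cursor into the topic's log, advanced only after its handler succeeds, so a failed handler sees the same message again (at-least-once). This is a toy model of the semantics, not the postgres broker:

```go
package main

import "fmt"

// topicLog is an in-memory model of an offset-tracked topic.
type topicLog struct {
	msgs    []string
	offsets map[string]int // subscriber ID -> next index to read
}

func newTopicLog() *topicLog {
	return &topicLog{offsets: map[string]int{}}
}

func (t *topicLog) Publish(msg string) { t.msgs = append(t.msgs, msg) }

// Consume delivers pending messages to the named subscriber, advancing
// its offset only after the handler succeeds. A failing handler leaves
// the offset in place, so the message is redelivered next time.
func (t *topicLog) Consume(subscriber string, handle func(string) error) {
	for i := t.offsets[subscriber]; i < len(t.msgs); i++ {
		if err := handle(t.msgs[i]); err != nil {
			return
		}
		t.offsets[subscriber] = i + 1
	}
}

func main() {
	log := newTopicLog()
	log.Publish("user.created:1")
	log.Publish("user.created:2")

	var seen []string
	collect := func(m string) error { seen = append(seen, m); return nil }

	log.Consume("email-sender", collect) // reads 1 and 2
	log.Publish("user.created:3")
	log.Consume("email-sender", collect) // resumes at 3, not from the start
	fmt.Println(seen)                    // [user.created:1 user.created:2 user.created:3]
}
```

Because each subscriber ID owns its own offset, adding a second subscriber fans the same messages out to it independently.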
The scheduler package handles job scheduling with pluggable storage backends. Configurable worker pools for parallel execution:
sched := scheduler.New(store, cfg, logger)
sched.Register("send-email", func(ctx context.Context, job scheduler.Job) scheduler.Result {
return scheduler.Result{Output: map[string]any{"sent": true}}
})
sched.Start(ctx)
Schedule types with timezone support:
daily := scheduler.Daily{Hour: 9, Minute: 0}
weekly := scheduler.Weekly{Day: time.Friday, Hour: 17, TZ: loc}
interval := scheduler.Interval{Every: 30 * time.Minute}
Fakes for testing:
store := scheduler.NewFakeStore()
clock := scheduler.NewFakeClock(baseTime)
sched := scheduler.New(store, cfg, log)
sched.SetClock(clock)
store.AddJob(job)
sched.Tick(ctx)
clock.Advance(time.Hour)
What Comes Next
There's a direction I've been exploring: a generator that produces code using these blocks, where you add a model, add a handler, wire a route, and the code updates incrementally rather than regenerating everything. The architecture would have a CLI layer for deterministic operations with a conversational interface on top that translates what you describe into CLI commands.
I've written about the limitations of generators in a previous iteration of this project. Generators freeze decisions too early and create maintenance burden when patterns evolve, but one that understands the underlying blocks and can be invoked incrementally might avoid those traps. That's future work; the focus now is stabilizing the primitives.
Reference: github.com/hatmaxkit/hatmax