Every codec, every ladder rung,
one Rust workspace.
ViewCasta is the 19-service streaming platform an operator can run on its own metal. VOD, live TV, smart ABR transcoding, AES-128 protection, row-level multi-tenancy, and Pingora-backed edge caching that lives inside the ISP's own POP. 114 thousand lines of Rust, AGPL-3.0.
- 4K UHD · 20,000 kbps
- 4K · 12,000 kbps
- 1080p · 8,000 kbps
- 1080p · 5,000 kbps
- 720p · 4,000 kbps
- 720p · 2,000 kbps
- 480p · 1,000 kbps
Rungs at and below the source resolution, never above. The function is smart_abr_ladder(codec, width, height) in crates/services/transcoder/src/ffmpeg.rs.
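The never-upscale selection can be sketched as a filter over the rung table above. This is a minimal illustration, not the real smart_abr_ladder: the actual function also takes the codec into account, and the Rung type and field names here are assumptions.

```rust
/// One rung of the ABR ladder: output height and video bitrate in kbps.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Rung { height: u32, bitrate_kbps: u32 }

/// Full ladder, highest rung first (values from the table above).
const LADDER: [Rung; 7] = [
    Rung { height: 2160, bitrate_kbps: 20_000 }, // 4K UHD
    Rung { height: 2160, bitrate_kbps: 12_000 }, // 4K
    Rung { height: 1080, bitrate_kbps: 8_000 },
    Rung { height: 1080, bitrate_kbps: 5_000 },
    Rung { height: 720,  bitrate_kbps: 4_000 },
    Rung { height: 720,  bitrate_kbps: 2_000 },
    Rung { height: 480,  bitrate_kbps: 1_000 },
];

/// Keep only rungs at or below the source height: never upscale.
fn smart_abr_ladder(source_height: u32) -> Vec<Rung> {
    LADDER.iter().copied().filter(|r| r.height <= source_height).collect()
}

fn main() {
    // A 1080p source gets the five rungs from 1080p down; no 4K rungs.
    let rungs = smart_abr_ladder(1080);
    assert_eq!(rungs.len(), 5);
    assert!(rungs.iter().all(|r| r.height <= 1080));
}
```

A 4K source keeps all seven rungs; a 480p source transcodes to a single rung rather than inventing detail that was never in the mezzanine.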
What is ViewCasta
A complete streaming platform an operator can deploy as one product.
ViewCasta is the publicly branded name for the codebase that has lived inside KaritKarma as PlexBD. It is one Cargo workspace, 19 services across a platform plane (12 services) and a CDN plane (7 services), and roughly 114 thousand lines of Rust under the AGPL-3.0 licence.
It is built for the operator that wants the whole pipeline as a single product. Ingest through tus, smart ABR transcoding with a ladder that never upscales, HLS and DASH packaging, AES-128 protected delivery, ScyllaDB analytics, row-level multi-tenancy, profiles and continue-watching on the portal, and Pingora-backed cdn-edge nodes that live in the ISP's POP rather than in a foreign cloud.
Live at viewcasta.com.
Architecture
Two planes. 19 Rust crates. One workspace.
The platform plane runs the business: catalog, subscribers, billing, analytics, ads. The CDN plane runs the bytes: origin, edge, GeoDNS, and pluggable content sources. Both planes ship in the same Cargo.toml.
Sync paths are gRPC over tonic. Async paths are NATS JetStream subjects like plexbd.transcode.complete. Client-facing surface is REST through the gateway, with WebSocket push for live admin dashboards.
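The JetStream subject namespace can be sketched as a small event-to-subject mapping. Only plexbd.transcode.complete appears verbatim above; the other subject strings, the enum, and the "dhaka-01" edge name are illustrative assumptions.

```rust
/// Events published over NATS JetStream (only TranscodeComplete's subject
/// is documented; the others are assumed for illustration).
enum PlatformEvent {
    TranscodeComplete,
    ContentPublished,
    EdgeHealth { edge_id: String },
}

/// Map an event to its JetStream subject string.
fn subject(event: &PlatformEvent) -> String {
    match event {
        PlatformEvent::TranscodeComplete => "plexbd.transcode.complete".into(),
        PlatformEvent::ContentPublished => "plexbd.content.published".into(),
        PlatformEvent::EdgeHealth { edge_id } => format!("edge.health.{edge_id}"),
    }
}

fn main() {
    assert_eq!(subject(&PlatformEvent::TranscodeComplete), "plexbd.transcode.complete");
    let e = PlatformEvent::EdgeHealth { edge_id: "dhaka-01".into() };
    assert_eq!(subject(&e), "edge.health.dhaka-01");
}
```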
crates/services + crates/cdn
Platform plane · 12 services
- Gateway · services/gateway · REST/gRPC bridge, tenant resolution, rate limit, WS push.
- Catalog · services/catalog · Content CMS, TMDB enrichment, search, genres, people, cast.
- Transcoder · services/transcoder · FFmpeg orchestration, smart ABR ladder, tokio::Semaphore bounded.
- Streamer · services/streamer · HLS and DASH serving, token validation, session tracking.
- Key Server · services/keyserver · AES-128 content protection, token-based key delivery.
- Subscriber · services/subscriber · Users, plans, entitlements, profiles, watch history, favorites.
- EPG · services/epg · XMLTV and DVB-SI ingestion, schedule and catch-up window.
- Ad Server · services/adserver · Server-Side Ad Insertion, campaign and creative management.
- Analytics · services/analytics · Clickstream ingest, QoE metrics, viewing reports, dashboards.
- Tenant · services/tenant · Multi-tenant operator lifecycle, row-level isolation.
- Scheduler · services/scheduler · Distributed job queue, per-tenant cron, dead-letter.
- Notifier · services/notifier · Delivery hand-off to BitsPath for SMS, email, push, WhatsApp.
CDN plane · 7 crates
- cdn-core · cdn/cdn-core · ContentSource trait, LRU cache, range request orchestration.
- cdn-origin · cdn/cdn-origin · Origin server, signed URLs, segment and manifest service.
- cdn-edge · cdn/cdn-edge · Pingora-backed re-streamer deployed inside ISP POPs.
- cdn-api · cdn/cdn-api · Edge fleet management, cache control, pre-warm jobs.
- cdn-analytics · cdn/cdn-analytics · Per-edge metrics, cache hit ratio, bandwidth attribution.
- cdn-dns · cdn/cdn-dns · GeoDNS routing, subscriber-to-nearest-edge resolution.
- cdn-sources · cdn/cdn-sources/* · Pluggable origin adapters: PlexBD, S3, FS, HTTP origin pull.
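The ContentSource trait in cdn-core is what lets cdn-sources plug PlexBD, S3, the filesystem, or a plain HTTP origin behind one interface. A minimal sketch of the idea follows; the real trait is presumably async and richer, so the method name, sync signature, and MemSource type here are all assumptions.

```rust
use std::collections::HashMap;
use std::io;

/// Abstraction over where bytes come from (sketch of cdn-core's
/// ContentSource; names and the sync signature are assumptions).
trait ContentSource {
    /// Fetch `len` bytes of `key` starting at `offset` (range-request shaped).
    fn fetch_range(&self, key: &str, offset: u64, len: u64) -> io::Result<Vec<u8>>;
}

/// In-memory source standing in for an adapter like the FS one in cdn-sources.
struct MemSource { objects: HashMap<String, Vec<u8>> }

impl ContentSource for MemSource {
    fn fetch_range(&self, key: &str, offset: u64, len: u64) -> io::Result<Vec<u8>> {
        let bytes = self.objects.get(key)
            .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, key.to_string()))?;
        let start = offset as usize;
        if start > bytes.len() {
            return Err(io::Error::new(io::ErrorKind::InvalidInput, "range start past EOF"));
        }
        let end = (offset + len).min(bytes.len() as u64) as usize;
        Ok(bytes[start..end].to_vec())
    }
}

fn main() {
    let mut objects = HashMap::new();
    objects.insert("seg-001.ts".to_string(), b"0123456789".to_vec());
    let src = MemSource { objects };
    // Serve bytes 2..6, as an HTTP "Range: bytes=2-5" request would.
    assert_eq!(src.fetch_range("seg-001.ts", 2, 4).unwrap(), b"2345");
}
```

Adding a new origin type is then a matter of implementing the trait, which is why the adapters can live side by side under cdn/cdn-sources/*.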
Three-entity CDN
The ISP brings the rack. The operator brings the brand. The bytes belong to whoever pays for them.
ViewCasta does not lock an edge to a single tenant. The ISP that hosts the rack, the operator that ships the catalog, and any third party that buys CDN-as-a-Service are three separate entities. A single edge serves all three simultaneously, with per-customer attribution flowing back to cdn-analytics.
Edge Host
The ISP
Provides the rack space, power, and uplink inside its POP. ViewCasta ships the edge software; the operator never sees the host's tenant data.
Platform Tenant
The operator
An ISP, telco, or cable company running ViewCasta white-label. Owns its catalog, subscribers, branding, and apps. Auto-enrolled as a CDN customer.
CDN Customer
Anyone with bytes to ship
Third parties that buy ViewCasta CDN-as-a-Service. An edge cache serves every CDN customer at once, not locked to a single tenant.
Data tier
Five engines. One tenancy model.
ViewCasta does not pretend a single database handles every workload. Catalog and subscribers live in Postgres with row-level isolation. Clickstream lives in ScyllaDB with tenant-prefixed partition keys to keep each partition bounded. Hot session state lives in Redis. Object bytes live in MinIO. Cross-service events ride NATS JetStream.
The plexbd-db crate enforces tenant-scoping at the query layer. The migration set has 21 files dated 2026-03 covering tenants, content, subscribers, CDN, ads, keys, transcode jobs, EPG, notifications, libraries, profiles, and the trailer pipeline.
| Engine | Workload | Pattern |
|---|---|---|
| PostgreSQL 18.3 | Catalog, subscribers, profiles, schedules, jobs, ads, EPG | Row-level tenancy |
| ScyllaDB 2026.1 | Analytics events with tenant-prefixed partition keys | Shared keyspace |
| Redis 8.6 | Sessions, JWKS cache, active profile, rate-limit counters | Sub-ms reads |
| MinIO (S3) | plexbd-thumbnails, plexbd-trailers, plexbd-segments buckets | AGPL throughout |
| NATS JetStream | plexbd.content.* / transcode.* / edge.health topics | Fire-and-forget |
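The query-layer tenant scoping that plexbd-db enforces can be sketched as a builder that cannot emit SQL without a tenant predicate. The types and method names below are illustrative assumptions, not the crate's actual API.

```rust
/// Tenant handle, resolved by the gateway from the Wenme JWT.
#[derive(Clone, Copy)]
struct TenantId(i64);

/// Query builder that always carries a tenant predicate, so no caller
/// can accidentally read across tenants (stand-in for plexbd-db's scoping).
struct ScopedQuery { table: &'static str, tenant: TenantId, filters: Vec<String> }

impl ScopedQuery {
    fn new(table: &'static str, tenant: TenantId) -> Self {
        Self { table, tenant, filters: Vec::new() }
    }
    fn filter(mut self, clause: &str) -> Self {
        self.filters.push(clause.to_string());
        self
    }
    /// Every emitted query includes `tenant_id = ...` whether callers ask or not.
    fn to_sql(&self) -> String {
        let mut sql = format!("SELECT * FROM {} WHERE tenant_id = {}", self.table, self.tenant.0);
        for f in &self.filters {
            sql.push_str(&format!(" AND {f}"));
        }
        sql
    }
}

fn main() {
    let q = ScopedQuery::new("content", TenantId(7)).filter("kind = 'movie'");
    assert_eq!(q.to_sql(), "SELECT * FROM content WHERE tenant_id = 7 AND kind = 'movie'");
}
```

A production version would bind the tenant as a parameter rather than interpolating it; the point is that the scoping lives in the type, not in caller discipline.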
ViewCasta vs the stack you would otherwise stitch
Mux ships encoding. Bitmovin ships a player. AWS bills you for both. ViewCasta ships the whole pipeline as one product.
The honest comparison: ViewCasta is one product, source available under AGPL-3.0, runnable on the operator's own metal, with an ISP edge tier that none of the SaaS encoders attempt. The gap is multi-DRM (Widevine, FairPlay, PlayReady) which is wired into the model but adapter-pending. AES-128 token-protected delivery is the default today.
| Capability | ViewCasta | Mux | Bitmovin | AWS Media* | Wowza |
|---|---|---|---|---|---|
| End-to-end pipeline (ingest to player to billing) | Single product | Encoding + delivery | Encoder + player | Stitched services | Self-hosted only |
| ISP edge cache deployed inside operator POPs | ✓ | — | — | CloudFront only | — |
| Smart ABR ladder, never upscales | smart_abr_ladder() | Per-title encoding | Per-title encoding | MediaConvert presets | Manual ladder |
| Multi-tenant by row, single deployment | ✓ | — | — | — | — |
| Multi-DRM (Widevine, FairPlay, PlayReady) | Roadmapped, AES-128 today | ✓ | ✓ | SPEKE | ✓ |
| Pure Rust, zero garbage collection | ✓ | — | — | — | Java |
| Source-available license | AGPL-3.0 | Proprietary | Proprietary | Proprietary | Proprietary |
| Per-subscriber billing built in | ✓ | Usage only | Usage only | Usage only | BYO billing |
| Operator-branded apps for TV and mobile | White-label | BYO player | Player SDK | BYO entire app | — |

\* AWS MediaLive + MediaConvert + CloudFront, stitched. Not a single product. Comparison reflects publicly documented capabilities at time of writing.
KaritKarma platform
Identity, authorization, and comms come from the platform. ViewCasta keeps shipping pixels.
ViewCasta does not reinvent auth, RBAC, or notification fan-out. It integrates with three KaritKarma platform services that the rest of the KaritKarma product family also uses, so an operator that runs ViewCasta plus Wenme gets the same passkey-only sign-in across every surface in the suite.
Wenme
Identity
OAuth 2.1 + PKCE, WebAuthn passkeys, JWKS rotation every 5 minutes via Arc<RwLock>. Wired into 15 gateway handlers.
Darwan
Authorization
PBAC/RBAC decisions for operators, content managers, and support staff. Per-role API guards in the gateway.
BitsPath
Communications
Outbound delivery for new-content alerts, billing, and ops alarms. Notifier hands off; BitsPath fans out.
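The Wenme entry above mentions JWKS rotation every 5 minutes behind an Arc&lt;RwLock&gt;. The reader/refresher split can be sketched self-contained with std locks; the real gateway presumably refreshes on a tokio interval against Wenme's JWKS endpoint, and the Jwk type here is a placeholder.

```rust
use std::sync::{Arc, RwLock};

/// A published verification key (only the `kid` matters for this sketch).
#[derive(Clone, Debug, PartialEq)]
struct Jwk { kid: String }

/// Shared key set: many request handlers read, one refresher task writes.
#[derive(Clone)]
struct JwksCache { keys: Arc<RwLock<Vec<Jwk>>> }

impl JwksCache {
    fn new() -> Self {
        Self { keys: Arc::new(RwLock::new(Vec::new())) }
    }

    /// Called on each rotation interval (5 minutes in the gateway) with the
    /// freshly fetched key set; holds the write lock only for the swap.
    fn refresh(&self, fresh: Vec<Jwk>) {
        *self.keys.write().unwrap() = fresh;
    }

    /// Request path: look up a key by `kid` under the read lock.
    fn find(&self, kid: &str) -> Option<Jwk> {
        self.keys.read().unwrap().iter().find(|k| k.kid == kid).cloned()
    }
}

fn main() {
    let cache = JwksCache::new();
    assert!(cache.find("2026-03-a").is_none());
    cache.refresh(vec![Jwk { kid: "2026-03-a".into() }]);
    assert!(cache.find("2026-03-a").is_some());
}
```

Because readers never block each other and the writer only swaps a Vec, token validation stays on the fast path even while keys rotate underneath it.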
Launch path
Five steps from new tenant to live subscribers.
No on-prem rebuild. A new operator is provisioned into the same shared ViewCasta deployment as every other tenant, owns its catalog and apps, and rolls out edge nodes into its own POPs at its own pace.
01 · Provision the operator tenant
Tenant service creates the row, default plans, content library, and Darwan role catalog. The operator becomes tenant #N in the same shared deployment.
02 · Wire identity through Wenme
Subscribers sign in with passkeys (no passwords). The gateway validates with a rotating JWKS set and maps the Wenme UUID to a local subscriber ID on first login.
03 · Ingest the catalog
Upload titles via tus resumable upload. Catalog enriches with TMDB metadata (poster, backdrop, cast, bios). The transcoder schedules smart ABR jobs bounded by a tokio::Semaphore.
04 · Deploy ISP edges
Run the cdn-edge crate on hardware inside the POP. cdn-dns routes nearby subscribers; cdn-analytics reports cache hit ratio per edge back to the operator.
05 · Ship the apps
Operator-branded portals and TV/mobile apps roll out under the operator's name. Notifier dispatches launches through BitsPath. Billing rolls out through Subscriber + LoneSock Pay.
Engineering questions
The eight questions a streaming buyer always asks.
What is ViewCasta?
ViewCasta is a 19-service streaming platform written in pure Rust (114,503 lines, AGPL-3.0) that operators can deploy as VOD, live TV, or CDN-as-a-Service. It ships the entire pipeline as one product: ingest, smart ABR transcoding, packaging, origin and ISP edge CDN, AES-128 protection, multi-tenant catalog, subscriber and profile management, server-side ad insertion, and an analytics tier built on ScyllaDB. ViewCasta is the publicly branded name for the codebase that has lived as PlexBD internally.
Does ViewCasta support HLS and DASH out of the box?
Yes. The streamer service serves HLS manifests today at /hls/{content_id}/master.m3u8 with per-rung playlists (4K, 1080p, 720p, 480p) and segment URLs proxied through the gateway media route. CMAF packaging for DASH parity rides through the same packager. The portal player uses hls.js with auth headers stripped for /api/media/ paths so segments cache cleanly through the edge.
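Generating that master manifest is a straightforward string build over the rungs. A minimal sketch, with attribute values (BANDWIDTH in bits per second, RESOLUTION) following the standard HLS master-playlist format; the Variant type and the exact per-title rung set are assumptions.

```rust
/// One variant stream in the master playlist.
struct Variant { name: &'static str, bandwidth_bps: u64, width: u32, height: u32 }

/// Build a minimal HLS master playlist: one #EXT-X-STREAM-INF per rung.
fn master_playlist(variants: &[Variant]) -> String {
    let mut m3u8 = String::from("#EXTM3U\n#EXT-X-VERSION:3\n");
    for v in variants {
        m3u8.push_str(&format!(
            "#EXT-X-STREAM-INF:BANDWIDTH={},RESOLUTION={}x{}\n{}/playlist.m3u8\n",
            v.bandwidth_bps, v.width, v.height, v.name
        ));
    }
    m3u8
}

fn main() {
    let variants = [
        Variant { name: "1080p", bandwidth_bps: 8_000_000, width: 1920, height: 1080 },
        Variant { name: "720p",  bandwidth_bps: 4_000_000, width: 1280, height: 720 },
    ];
    let m = master_playlist(&variants);
    assert!(m.starts_with("#EXTM3U"));
    assert!(m.contains("RESOLUTION=1920x1080"));
    assert!(m.contains("1080p/playlist.m3u8"));
}
```

hls.js reads the BANDWIDTH attribute of each rung to drive its ABR switching, which is why the transcoder's ladder bitrates flow straight into this manifest.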
Can I run ViewCasta on-prem?
Yes. ViewCasta ships as a Cargo workspace and a Docker Compose deployment (migrating to Kubernetes). Operators can run the full 19-service stack in their own data centre or as a managed SaaS tenant. The CDN edge specifically is designed to live inside an ISP POP: Pingora-backed re-streaming, GeoDNS routing, and per-edge analytics, without ever phoning home to a central origin for cache hits.
What is the concurrent-viewer ceiling?
The platform is horizontally scalable at every plane. The gateway is stateless behind a load balancer, the catalog and subscriber services scale per CPU, the streamer is bandwidth-bound and runs N replicas behind the edge tier, and the edge tier itself scales by adding cdn-edge nodes inside ISP POPs. The transcoder is the only intentionally rate-limited service: a tokio::Semaphore bounds concurrent FFmpeg jobs (default 2, production tuned to 1 on 4GB memory boxes) to avoid OOM under unbounded ingest bursts.
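The real bound is a tokio::Semaphore in the transcoder; the same at-most-N behaviour can be shown self-contained with std threads, a Mutex + Condvar counting semaphore, and an atomic high-water mark standing in for concurrent FFmpeg processes. Everything below is a sketch of the idea, not the service's code.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

/// Counting semaphore over Mutex + Condvar (stand-in for tokio::Semaphore).
struct Semaphore { permits: Mutex<usize>, cv: Condvar }

impl Semaphore {
    fn new(n: usize) -> Self { Self { permits: Mutex::new(n), cv: Condvar::new() } }
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 { p = self.cv.wait(p).unwrap(); }
        *p -= 1;
    }
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let sem = Arc::new(Semaphore::new(2)); // default bound: 2 concurrent jobs
    let running = Arc::new(AtomicUsize::new(0));
    let peak = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..8).map(|_| {
        let (sem, running, peak) = (sem.clone(), running.clone(), peak.clone());
        thread::spawn(move || {
            sem.acquire();
            let now = running.fetch_add(1, Ordering::SeqCst) + 1;
            peak.fetch_max(now, Ordering::SeqCst);
            thread::sleep(Duration::from_millis(10)); // the "FFmpeg job"
            running.fetch_sub(1, Ordering::SeqCst);
            sem.release();
        })
    }).collect();
    for h in handles { h.join().unwrap(); }

    // However many jobs arrive, at most 2 ever ran at once.
    assert!(peak.load(Ordering::SeqCst) <= 2);
}
```

Tuning the bound down to 1 on a 4GB box is just a different permit count; the queueing behaviour is identical.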
Does ViewCasta have its own player?
Yes. The portal at web/portal ships a Next.js 16 player with hls.js, hero ABR autoplay, hover preview clips, multi-bitrate switching, Netflix-style profile selector (5 per user, kids mode), continue-watching with red progress bar and per-card remove, watch history, and a Person/Filmography graph driven from TMDB enrichment. The same portal runs operator-branded for white-label tenants.
How does ViewCasta handle multi-tenancy?
Postgres uses row-level isolation with a tenant_id column on every table, enforced inside the plexbd-db crate. ScyllaDB uses shared tables with tenant-prefixed partition keys (not per-tenant keyspaces) to keep operational complexity flat. The Tenant service owns tenant lifecycle; the gateway resolves the tenant from the Wenme JWT and pushes it into the request context for every downstream gRPC call.
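The gateway's resolution step (verified JWT claims to a request context that rides every gRPC call) can be sketched as a pure function. The claim name "tenant_id", the error shape, and the RequestContext fields are assumptions for illustration.

```rust
use std::collections::HashMap;

/// Per-request context pushed into every downstream gRPC call.
#[derive(Debug, PartialEq)]
struct RequestContext { tenant_id: i64, subject: String }

/// Resolve the tenant from already-verified JWT claims; reject requests
/// that carry no tenant (claim names here are assumptions).
fn resolve_tenant(claims: &HashMap<String, String>) -> Result<RequestContext, &'static str> {
    let tenant_id = claims.get("tenant_id")
        .ok_or("missing tenant_id claim")?
        .parse::<i64>()
        .map_err(|_| "malformed tenant_id claim")?;
    let subject = claims.get("sub").ok_or("missing sub claim")?.clone();
    Ok(RequestContext { tenant_id, subject })
}

fn main() {
    let mut claims = HashMap::new();
    claims.insert("sub".to_string(), "subscriber-abc".to_string());
    claims.insert("tenant_id".to_string(), "7".to_string());
    let ctx = resolve_tenant(&claims).unwrap();
    assert_eq!(ctx.tenant_id, 7);
    assert_eq!(ctx.subject, "subscriber-abc");
    // A token with no tenant never reaches a downstream service.
    assert!(resolve_tenant(&HashMap::new()).is_err());
}
```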
How does the ISP edge CDN actually work?
The cdn-edge crate is a Pingora-based re-streamer that an ISP installs inside its POP. cdn-dns answers subscriber DNS queries with the nearest healthy edge IP. Hot segments are served from the LRU cache in cdn-core; cold segments are fetched from cdn-origin once and pinned. cdn-analytics emits per-edge cache hit ratio and bandwidth back to cdn-api. The model is three-entity (Edge Host, Platform Tenant, CDN Customer); an edge serves every CDN customer simultaneously, not just the tenant that hosts it.
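The hot/cold split above (serve from the LRU, fetch from cdn-origin once, pin) can be sketched with a tiny least-recently-used map. cdn-core presumably uses a production LRU with byte-size accounting; capacity by entry count below is a simplification, and all names are assumptions.

```rust
use std::collections::{HashMap, VecDeque};

/// Minimal LRU keyed by segment path; capacity in entries, not bytes.
struct SegmentCache {
    cap: usize,
    map: HashMap<String, Vec<u8>>,
    order: VecDeque<String>, // front = most recently used
}

impl SegmentCache {
    fn new(cap: usize) -> Self {
        Self { cap, map: HashMap::new(), order: VecDeque::new() }
    }
    fn touch(&mut self, key: &str) {
        self.order.retain(|k| k != key);
        self.order.push_front(key.to_string());
    }
    /// Cache hit: serve and mark recently used. Miss: caller goes to origin.
    fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        if self.map.contains_key(key) { self.touch(key); }
        self.map.get(key).cloned()
    }
    /// Insert a segment fetched from origin, evicting the coldest entry.
    fn put(&mut self, key: &str, bytes: Vec<u8>) {
        if self.map.len() >= self.cap && !self.map.contains_key(key) {
            if let Some(cold) = self.order.pop_back() { self.map.remove(&cold); }
        }
        self.map.insert(key.to_string(), bytes);
        self.touch(key);
    }
}

fn main() {
    let mut cache = SegmentCache::new(2);
    cache.put("a.ts", vec![1]);
    cache.put("b.ts", vec![2]);
    cache.get("a.ts");          // a is now hot
    cache.put("c.ts", vec![3]); // evicts b, the coldest entry
    assert!(cache.get("b.ts").is_none());
    assert!(cache.get("a.ts").is_some());
    assert!(cache.get("c.ts").is_some());
}
```

The per-edge hit ratio cdn-analytics reports is exactly the get-hit rate of a structure like this, which is why popular live segments almost never leave the POP.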
What about content protection and DRM?
AES-128 token-protected delivery ships today through the keyserver service: per-content keys, signed manifest tokens, and short-lived session tokens validated at the streamer. Multi-DRM (Widevine, FairPlay, PlayReady) is wired into the catalog model and the gateway routes; the DRM gateway adapters are on the post-launch roadmap. For most operators, token-protected AES-128 plus signed URLs and device limits is enough to start.
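The short-lived session tokens the keyserver issues and the streamer validates boil down to an expiry check plus a signature check. A sketch follows: a real keyserver would sign with a keyed MAC such as HMAC-SHA256, for which std's DefaultHasher below is only a stand-in, not a secure substitute, and every name here is an assumption.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Token authorizing one key fetch: key id, expiry (unix seconds), signature.
struct KeyToken { key_id: String, expires_at: u64, sig: u64 }

/// Stand-in signature; a real keyserver uses HMAC-SHA256 over the body.
fn sign(key_id: &str, expires_at: u64, secret: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (key_id, expires_at, secret).hash(&mut h);
    h.finish()
}

/// Keyserver side: mint a token with a short TTL.
fn issue(key_id: &str, now: u64, ttl_secs: u64, secret: &str) -> KeyToken {
    let expires_at = now + ttl_secs;
    KeyToken { key_id: key_id.to_string(), expires_at, sig: sign(key_id, expires_at, secret) }
}

/// Streamer side: the signature must match and the token must be unexpired.
fn validate(t: &KeyToken, now: u64, secret: &str) -> bool {
    t.sig == sign(&t.key_id, t.expires_at, secret) && now < t.expires_at
}

fn main() {
    let t = issue("content-42-key", 1_000, 300, "operator-secret");
    assert!(validate(&t, 1_100, "operator-secret"));  // inside the TTL
    assert!(!validate(&t, 1_400, "operator-secret")); // expired
    assert!(!validate(&t, 1_100, "wrong-secret"));    // bad signature
}
```

Because the expiry is inside the signed body, a client cannot extend its own token; it has to come back through the gateway, where entitlements are re-checked.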
One product. 19 Rust services. 7 ABR rungs. Your edge in the ISP's rack.
ViewCasta is the streaming platform an operator can actually deploy as one thing. Start a tenant in the shared deployment, ship the apps, drop cdn-edge nodes into your POPs as the subscriber count grows.