ViewCasta
Streaming platform for operators

Every codec, every ladder rung,
one Rust workspace.

ViewCasta is the 19-service streaming platform an operator can run on its own metal. VOD, live TV, smart ABR transcoding, AES-128 protection, row-level multi-tenancy, and Pingora-backed edge caching that lives inside the ISP's own POP. 114 thousand lines of Rust, AGPL-3.0.

114 KLOC · Rust workspace
19 · Microservices
21 · DB migrations
AGPL-3.0 · License
smart_abr_ladder · HEVC
  • 4K UHD · 20,000 kbps
  • 4K · 12,000 kbps
  • 1080p · 8,000 kbps
  • 1080p · 5,000 kbps
  • 720p · 4,000 kbps
  • 720p · 2,000 kbps
  • 480p · 1,000 kbps
Source: 3840x2160 HDR
Output: HLS · CMAF segments

Rungs sit at and below the source resolution, never above it. The function signature is smart_abr_ladder(codec, width, height) in crates/services/transcoder/src/ffmpeg.rs.
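
A minimal sketch of that rule, assuming a fixed HEVC rung table rather than the production logic in services/transcoder: rungs whose height exceeds the source are simply dropped.

  // Illustrative sketch of the never-upscale rule; the rung table and types
  // here are assumptions, not the production code in services/transcoder.
  #[derive(Clone, Copy)]
  struct Rung {
      label: &'static str,
      height: u32,
      bitrate_kbps: u32,
  }

  const HEVC_RUNGS: &[Rung] = &[
      Rung { label: "4K UHD", height: 2160, bitrate_kbps: 20_000 },
      Rung { label: "4K",     height: 2160, bitrate_kbps: 12_000 },
      Rung { label: "1080p",  height: 1080, bitrate_kbps: 8_000 },
      Rung { label: "1080p",  height: 1080, bitrate_kbps: 5_000 },
      Rung { label: "720p",   height: 720,  bitrate_kbps: 4_000 },
      Rung { label: "720p",   height: 720,  bitrate_kbps: 2_000 },
      Rung { label: "480p",   height: 480,  bitrate_kbps: 1_000 },
  ];

  /// Keep every rung at or below the source height; never emit a rung
  /// that would upscale the mezzanine.
  fn smart_abr_ladder(_codec: &str, _width: u32, source_height: u32) -> Vec<Rung> {
      HEVC_RUNGS
          .iter()
          .copied()
          .filter(|rung| rung.height <= source_height)
          .collect()
  }

A 3840x2160 source keeps all seven rungs; a 1080p mezzanine drops the two 4K rungs and ships the remaining five.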

What is ViewCasta

A complete streaming platform an operator can deploy as one product.

ViewCasta is the publicly branded name for the codebase that has lived inside KaritKarma as PlexBD. It is one Cargo workspace, 19 services across a platform plane (12 services) and a CDN plane (7 services), and roughly 114 thousand lines of Rust under the AGPL-3.0 license.

It is built for the operator that wants the whole pipeline as a single product. Ingest through tus, smart ABR transcoding with a ladder that never upscales, HLS and DASH packaging, AES-128 protected delivery, ScyllaDB analytics, row-level multi-tenancy, profiles and continue-watching on the portal, and Pingora-backed cdn-edge nodes that live in the ISP's POP rather than in a foreign cloud.

Live at viewcasta.com.

ViewCasta · Pure Rust · AGPL-3.0 · Row-level multi-tenant · ISP edge CDN · Smart ABR ladder

Architecture

Two planes. 19 Rust crates. One workspace.

The platform plane runs the business: catalog, subscribers, billing, analytics, ads. The CDN plane runs the bytes: origin, edge, GeoDNS, and pluggable content sources. Both planes ship in the same Cargo.toml.

Sync paths are gRPC over tonic. Async paths are NATS JetStream subjects like plexbd.transcode.complete. Client-facing surface is REST through the gateway, with WebSocket push for live admin dashboards.
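
As a rough illustration of the async path, publishing a transcode-complete event with the async-nats crate might look like the sketch below; the connection URL and payload shape are assumptions, only the subject name comes from the platform.

  // Sketch only: publish a transcode-complete event on JetStream.
  // Connection URL and payload shape are assumptions.
  use async_nats::jetstream;

  async fn announce_transcode_complete(content_id: &str) -> Result<(), async_nats::Error> {
      let client = async_nats::connect("nats://nats:4222").await?;
      let js = jetstream::new(client);

      let payload = format!(r#"{{"content_id":"{content_id}","status":"complete"}}"#);

      // Wait for the JetStream ack so the event is durably stored.
      js.publish("plexbd.transcode.complete", payload.into())
          .await?
          .await?;

      Ok(())
  }

Consumers such as the catalog and notifier services subscribe to the subject and react asynchronously, which keeps the transcoder free of direct call-outs on the hot path.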

Source: crates/services + crates/cdn

Platform plane · 12 services

  • Gateway
    services/gateway

    REST/gRPC bridge, tenant resolution, rate limit, WS push.

  • Catalog
    services/catalog

    Content CMS, TMDB enrichment, search, genres, people, cast.

  • Transcoder
    services/transcoder

    FFmpeg orchestration, smart ABR ladder, tokio::Semaphore bounded.

  • Streamer
    services/streamer

    HLS and DASH serving, token validation, session tracking.

  • Key Server
    services/keyserver

    AES-128 content protection, token-based key delivery.

  • Subscriber
    services/subscriber

    Users, plans, entitlements, profiles, watch history, favorites.

  • EPG
    services/epg

    XMLTV and DVB-SI ingestion, schedule and catch-up window.

  • Ad Server
    services/adserver

    Server-Side Ad Insertion, campaign and creative management.

  • Analytics
    services/analytics

    Clickstream ingest, QoE metrics, viewing reports, dashboards.

  • Tenant
    services/tenant

    Multi-tenant operator lifecycle, row-level isolation.

  • Scheduler
    services/scheduler

    Distributed job queue, per-tenant cron, dead-letter.

  • Notifier
    services/notifier

    Delivery hand-off to BitsPath for SMS, email, push, WhatsApp.

CDN plane · 7 crates

Pingora 0.8 inside
  • cdn-core
    cdn/cdn-core

    ContentSource trait (sketched just after this list), LRU cache, range request orchestration.

  • cdn-origin
    cdn/cdn-origin

    Origin server, signed URLs, segment and manifest service.

  • cdn-edge
    cdn/cdn-edge

    Pingora-backed re-streamer deployed inside ISP POPs.

  • cdn-api
    cdn/cdn-api

    Edge fleet management, cache control, pre-warm jobs.

  • cdn-analytics
    cdn/cdn-analytics

    Per-edge metrics, cache hit ratio, bandwidth attribution.

  • cdn-dns
    cdn/cdn-dns

    GeoDNS routing, subscriber-to-nearest-edge resolution.

  • cdn-sources
    cdn/cdn-sources/*

    Pluggable origin adapters: PlexBD, S3, FS, HTTP origin pull.
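
The ContentSource seam referenced on the cdn-core card is where those adapters plug in. Its real definition lives in cdn/cdn-core and is not reproduced here; a hypothetical shape with assumed method names:

  // Hypothetical shape of a pluggable origin adapter; the actual trait in
  // cdn-core may differ. async_trait is assumed for object safety.
  use std::ops::Range;

  #[async_trait::async_trait]
  pub trait ContentSource: Send + Sync {
      /// Fetch a byte range of one object (segment or manifest) from the origin.
      async fn fetch_range(
          &self,
          object_key: &str,
          range: Range<u64>,
      ) -> std::io::Result<Vec<u8>>;

      /// Object size, used to answer open-ended Range requests.
      async fn content_length(&self, object_key: &str) -> std::io::Result<u64>;
  }

An s3-source, fs-source, or http-origin adapter then only implements these calls; the LRU cache and range orchestration in cdn-core stay source-agnostic.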

Three-entity CDN

The ISP brings the rack. The operator brings the brand. The bytes belong to whoever pays for them.

ViewCasta does not lock an edge to a single tenant. The ISP that hosts the rack, the operator that ships the catalog, and any third party that buys CDN-as-a-Service are three separate entities. A single edge serves all three simultaneously, with per-customer attribution flowing back to cdn-analytics.

01 / 03 · Edge Host · The ISP

Provides the rack space, power, and uplink inside its POP. ViewCasta ships the edge software; the operator never sees the host's tenant data.

02 / 03 · Platform Tenant · The operator

An ISP, telco, or cable company running ViewCasta white-label. Owns its catalog, subscribers, branding, and apps. Auto-enrolled as a CDN customer.

03 / 03 · CDN Customer · Anyone with bytes to ship

Third parties that buy ViewCasta CDN-as-a-Service. An edge cache serves every CDN customer at once, not locked to a single tenant.

subscriber >> cdn-dns (GeoDNS) >> cdn-edge (ISP POP, Pingora) >> cdn-origin / s3-source >> streamer + keyserver
cache hit >> served from edge LRU · cache miss >> origin pull + pin

Data tier

Five engines. One tenancy model.

ViewCasta does not pretend a single database handles every workload. Catalog and subscribers live in Postgres with row-level isolation. Clickstream lives in ScyllaDB with tenant-prefixed partition keys to keep each partition bounded. Hot session state lives in Redis. Object bytes live in MinIO. Cross-service events ride NATS JetStream.

The plexbd-db crate enforces tenant-scoping at the query layer. The migration set has 21 files dated 2026-03 covering tenants, content, subscribers, CDN, ads, keys, transcode jobs, EPG, notifications, libraries, profiles, and the trailer pipeline.
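
The plexbd-db helper API is not reproduced here, but the discipline it enforces amounts to never shipping a query without a tenant_id bind. A sketch with sqlx, using an illustrative table and struct rather than the real schema:

  // Sketch only: every read is scoped by tenant_id at the query layer.
  // Table and column names are illustrative, not the real migrations.
  use sqlx::PgPool;

  #[derive(sqlx::FromRow)]
  struct Title {
      id: uuid::Uuid,
      name: String,
  }

  async fn titles_for_tenant(pool: &PgPool, tenant_id: uuid::Uuid) -> sqlx::Result<Vec<Title>> {
      sqlx::query_as::<_, Title>(
          "SELECT id, name FROM content WHERE tenant_id = $1 ORDER BY name",
      )
      .bind(tenant_id)
      .fetch_all(pool)
      .await
  }

Because the tenant bind is mandatory at the query layer, a handler that forgets it never compiles against the scoped helpers rather than silently leaking rows across operators.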

Engine | Workload | Pattern
PostgreSQL 18.3 | Catalog, subscribers, profiles, schedules, jobs, ads, EPG | Row-level tenancy
ScyllaDB 2026.1 | Analytics events with tenant-prefixed partition keys | Shared keyspace
Redis 8.6 | Sessions, JWKS cache, active profile, rate-limit counters | Sub-ms reads
MinIO (S3) | plexbd-thumbnails, plexbd-trailers, plexbd-segments buckets | AGPL throughout
NATS JetStream | plexbd.content.* / transcode.* / edge.health topics | Fire-and-forget

ViewCasta vs the stack you would otherwise stitch

Mux ships encoding. Bitmovin ships a player. AWS bills you for both. ViewCasta ships the whole pipeline as one product.

The honest comparison: ViewCasta is one product, source available under AGPL-3.0, runnable on the operator's own metal, with an ISP edge tier that none of the SaaS encoders attempt. The gap is multi-DRM (Widevine, FairPlay, PlayReady), which is wired into the model but adapter-pending. AES-128 token-protected delivery is the default today.

Capability | ViewCasta | Mux | Bitmovin | AWS Media* | Wowza
End-to-end pipeline (ingest to player to billing) | Single product | Encoding + delivery | Encoder + player | Stitched services | Self-hosted only
ISP edge cache deployed inside operator POPs | Yes | | | CloudFront only |
Smart ABR ladder, never upscales | smart_abr_ladder() | Per-title encoding | Per-title encoding | MediaConvert presets | Manual ladder
Multi-tenant by row, single deployment | Yes | | | |
Multi-DRM (Widevine, FairPlay, PlayReady) | Roadmapped, AES-128 today | | | SPEKE |
Pure Rust, zero garbage collection | Yes | | | | Java
Source-available license | AGPL-3.0 | Proprietary | Proprietary | Proprietary | Proprietary
Per-subscriber billing built in | Yes | Usage only | Usage only | Usage only | BYO billing
Operator-branded apps for TV and mobile | White-label | BYO player | Player SDK | BYO entire app |
* AWS MediaLive + MediaConvert + CloudFront, stitched. Not a single product. Comparison reflects publicly documented capabilities at time of writing.

KaritKarma platform

Identity, authorization, and comms come from the platform. ViewCasta keeps shipping pixels.

ViewCasta does not reinvent auth, RBAC, or notification fan-out. It integrates with three KaritKarma platform services that the rest of the KaritKarma product family also uses, so an operator that runs ViewCasta plus Wenme gets the same passkey-only sign-in across every surface in the suite.

Wenme · Identity

OAuth 2.1 + PKCE, WebAuthn passkeys, JWKS rotation every 5 minutes via Arc<RwLock> (a refresh sketch follows these cards). Wired into 15 gateway handlers.

Darwan · Authorization

PBAC/RBAC decisions for operators, content managers, and support staff. Per-role API guards in the gateway.

BitsPath · Communications

Outbound delivery for new-content alerts, billing, and ops alarms. Notifier hands off; BitsPath fans out.
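
The five-minute JWKS rotation mentioned on the Wenme card reduces to a background task that swaps a shared key set behind Arc<RwLock>. A minimal sketch, assuming a reqwest fetch and a generic JSON key set rather than the gateway's actual types:

  // Sketch of a periodically refreshed JWKS cache behind Arc<RwLock>.
  // The endpoint URL and the stand-in JwkSet type are assumptions.
  use std::sync::Arc;
  use std::time::Duration;
  use tokio::sync::RwLock;

  type JwkSet = serde_json::Value; // stand-in for a real JWKS type

  async fn spawn_jwks_refresher(url: String) -> Arc<RwLock<JwkSet>> {
      let initial = fetch_jwks(&url).await.expect("initial JWKS fetch");
      let cache = Arc::new(RwLock::new(initial));

      let handle = cache.clone();
      tokio::spawn(async move {
          let mut tick = tokio::time::interval(Duration::from_secs(300)); // 5 minutes
          loop {
              tick.tick().await;
              if let Ok(fresh) = fetch_jwks(&url).await {
                  *handle.write().await = fresh; // readers see the new keys on their next request
              }
          }
      });

      cache
  }

  async fn fetch_jwks(url: &str) -> reqwest::Result<JwkSet> {
      reqwest::get(url).await?.json().await
  }

Token validation in the gateway handlers only ever takes the read lock, so key rotation never blocks request traffic.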

Launch path

Five steps from new tenant to live subscribers.

No on-prem rebuild. A new operator is provisioned into the same shared ViewCasta deployment as every other tenant, owns its catalog and apps, and rolls out edge nodes into its own POPs at its own pace.

  1. Provision the operator tenant

    Tenant service creates the row, default plans, content library, and Darwan role catalog. The operator becomes tenant #N in the same shared deployment.

  2. Wire identity through Wenme

    Subscribers sign in with passkeys (no passwords). The gateway validates with a rotating JWKS set and maps the Wenme UUID to a local subscriber ID on first login.

  3. Ingest the catalog

    Upload titles via tus resumable upload. Catalog enriches with TMDB metadata (poster, backdrop, cast, bios). The transcoder schedules smart ABR jobs bounded by a tokio::Semaphore.

  4. Deploy ISP edges

    Run the cdn-edge crate on hardware inside the POP. cdn-dns routes nearby subscribers; cdn-analytics reports cache hit ratio per edge back to the operator.

  5. Ship the apps

    Operator-branded portals and TV/mobile apps roll out under the operator's name. Notifier dispatches launches through BitsPath. Billing rolls out through Subscriber + LoneSock Pay.

Engineering questions

The eight questions a streaming buyer always asks.

What is ViewCasta?

ViewCasta is a 19-service streaming platform written in pure Rust (114,503 lines, AGPL-3.0) that operators can deploy as VOD, live TV, or CDN-as-a-Service. It ships the entire pipeline as one product: ingest, smart ABR transcoding, packaging, origin and ISP edge CDN, AES-128 protection, multi-tenant catalog, subscriber and profile management, server-side ad insertion, and an analytics tier built on ScyllaDB. ViewCasta is the publicly branded name for the codebase that has lived as PlexBD internally.

Does ViewCasta support HLS and DASH out of the box?

Yes. The streamer service serves HLS manifests today at /hls/{content_id}/master.m3u8 with per-rung playlists (4k, 1080p, 720p, 480p) and segment URLs proxied through the gateway media route. CMAF packaging for DASH parity rides through the same packager. The portal player uses hls.js with auth headers stripped for /api/media/ paths so segments cache cleanly through the edge.
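
For orientation, a master playlist for that ladder looks roughly like this; the rung URIs and bandwidth values are illustrative, not the packager's exact output.

  #EXTM3U
  #EXT-X-VERSION:7
  #EXT-X-STREAM-INF:BANDWIDTH=20000000,RESOLUTION=3840x2160
  4k/playlist.m3u8
  #EXT-X-STREAM-INF:BANDWIDTH=8000000,RESOLUTION=1920x1080
  1080p/playlist.m3u8
  #EXT-X-STREAM-INF:BANDWIDTH=4000000,RESOLUTION=1280x720
  720p/playlist.m3u8
  #EXT-X-STREAM-INF:BANDWIDTH=1000000,RESOLUTION=854x480
  480p/playlist.m3u8

hls.js reads the EXT-X-STREAM-INF entries and switches rungs as measured throughput changes.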

Can I run ViewCasta on-prem?

Yes. ViewCasta ships as a Cargo workspace and a Docker Compose deployment (migrating to Kubernetes). Operators can run the full 19-service stack in their own data center or as a managed SaaS tenant. The CDN edge in particular is designed to live inside an ISP POP: Pingora-backed re-streaming, GeoDNS routing, and per-edge analytics, without ever phoning home to a central origin on a cache hit.

What is the concurrent-viewer ceiling?

The platform is horizontally scalable at every plane. The gateway is stateless behind a load balancer, the catalog and subscriber services scale per CPU, the streamer is bandwidth-bound and runs N replicas behind the edge tier, and the edge tier itself scales by adding cdn-edge nodes inside ISP POPs. The transcoder is the only intentionally rate-limited service: a tokio::Semaphore bounds concurrent FFmpeg jobs (default 2, production tuned to 1 on 4GB memory boxes) to avoid OOM under unbounded ingest bursts.
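
A sketch of that bound, with placeholder job inputs and FFmpeg arguments:

  // Sketch: cap concurrent FFmpeg runs with a semaphore so an ingest burst
  // queues instead of exhausting memory. Arguments are placeholders.
  use std::sync::Arc;
  use tokio::sync::Semaphore;

  async fn run_transcodes(jobs: Vec<String>, max_concurrent: usize) {
      let permits = Arc::new(Semaphore::new(max_concurrent)); // e.g. 2, or 1 on small boxes
      let mut handles = Vec::new();

      for input in jobs {
          let permit = permits.clone().acquire_owned().await.expect("semaphore closed");
          handles.push(tokio::spawn(async move {
              let _permit = permit; // held until this FFmpeg run finishes
              let status = tokio::process::Command::new("ffmpeg")
                  .args(["-i", &input, "-f", "hls", "out.m3u8"])
                  .status()
                  .await;
              println!("{input}: {status:?}");
          }));
      }

      for handle in handles {
          let _ = handle.await;
      }
  }

Because the permit is acquired before the job spawns, excess uploads simply wait in line; memory use stays proportional to the permit count, not the ingest rate.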

Does ViewCasta have its own player?

Yes. The portal at web/portal ships a Next.js 16 player with hls.js, hero ABR autoplay, hover preview clips, multi-bitrate switching, Netflix-style profile selector (5 per user, kids mode), continue-watching with red progress bar and per-card remove, watch history, and a Person/Filmography graph driven from TMDB enrichment. The same portal runs operator-branded for white-label tenants.

How does ViewCasta handle multi-tenancy?

Postgres uses row-level isolation with a tenant_id column on every table, enforced inside the plexbd-db crate. ScyllaDB uses shared tables with tenant-prefixed partition keys (not per-tenant keyspaces) to keep operational complexity flat. The Tenant service owns tenant lifecycle; the gateway resolves the tenant from the Wenme JWT and pushes it into the request context for every downstream gRPC call.
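
One way to picture that hand-off is tenant metadata attached to each outgoing tonic request; the header name and helper below are assumptions for illustration, not the gateway's actual wire contract.

  // Sketch: attach the resolved tenant to outgoing gRPC metadata so each
  // downstream service can scope its queries. Header name is an assumption.
  use tonic::metadata::{Ascii, MetadataValue};
  use tonic::Request;

  fn with_tenant<T>(inner: T, tenant_id: &str) -> Result<Request<T>, tonic::Status> {
      let mut request = Request::new(inner);
      let value: MetadataValue<Ascii> = tenant_id
          .parse()
          .map_err(|_| tonic::Status::invalid_argument("tenant id is not valid ASCII"))?;
      request.metadata_mut().insert("x-tenant-id", value);
      Ok(request)
  }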

How does the ISP edge CDN actually work?

The cdn-edge crate is a Pingora-based re-streamer that an ISP installs inside its POP. cdn-dns answers subscriber DNS queries with the nearest healthy edge IP. Hot segments are served from the LRU cache in cdn-core; cold segments are fetched from cdn-origin once and pinned. cdn-analytics emits per-edge cache hit ratio and bandwidth back to cdn-api. The model is three-entity (Edge Host, Platform Tenant, CDN Customer); an edge serves every CDN customer simultaneously, not just the tenant that hosts it.
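
A much-simplified sketch of the hit/miss behaviour using the lru crate; sizing, pinning, and range handling in the real cdn-core are richer than this.

  // Sketch: serve hot segments from an in-memory LRU, pull cold ones from
  // the origin once. Capacity and the origin URL are placeholders.
  use std::num::NonZeroUsize;
  use lru::LruCache;

  struct EdgeCache {
      segments: LruCache<String, Vec<u8>>, // key: segment path, value: bytes
  }

  impl EdgeCache {
      fn new(capacity: usize) -> Self {
          Self { segments: LruCache::new(NonZeroUsize::new(capacity).unwrap()) }
      }

      async fn get_segment(&mut self, path: &str) -> Vec<u8> {
          if let Some(bytes) = self.segments.get(path) {
              return bytes.clone(); // cache hit: served straight from the edge
          }
          let bytes = fetch_from_origin(path).await; // cache miss: one origin pull
          self.segments.put(path.to_string(), bytes.clone());
          bytes
      }
  }

  async fn fetch_from_origin(path: &str) -> Vec<u8> {
      // Placeholder for the real cdn-origin / s3-source pull.
      reqwest::get(format!("https://origin.example/{path}"))
          .await
          .unwrap()
          .bytes()
          .await
          .unwrap()
          .to_vec()
  }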

What about content protection and DRM?

AES-128 token-protected delivery ships today through the keyserver service: per-content keys, signed manifest tokens, and short-lived session tokens validated at the streamer. Multi-DRM (Widevine, FairPlay, PlayReady) is wired into the catalog model and the gateway routes; the DRM gateway adapters are on the post-launch roadmap. For most operators, token-protected AES-128 plus signed URLs and device limits is enough to start.
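
A hedged sketch of the per-content key side; key persistence, the key URI shape, and token issuance are assumptions here, not the keyserver's actual API.

  // Sketch: mint a random 16-byte AES-128 content key and the EXT-X-KEY line
  // that points players at the keyserver. URI and token format are assumptions.
  use rand::RngCore;

  fn new_content_key() -> [u8; 16] {
      let mut key = [0u8; 16];
      rand::thread_rng().fill_bytes(&mut key);
      key
  }

  fn ext_x_key_line(content_id: &str, session_token: &str) -> String {
      // The playlist never carries the key itself, only where to fetch it;
      // the keyserver validates the short-lived token before answering.
      format!(
          "#EXT-X-KEY:METHOD=AES-128,URI=\"https://keys.example/v1/keys/{content_id}?token={session_token}\""
      )
  }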

One product. 19 Rust services. 7 ABR rungs. Your edge in the ISP's rack.

ViewCasta is the streaming platform an operator can actually deploy as one thing. Start a tenant in the shared deployment, ship the apps, drop cdn-edge nodes into your POPs as the subscriber count grows.