Ship AI features in client apps with server-side control
Wavelike gives native and web apps a hosted gateway for model routing, app-user auth, quota policies, and spend attribution, so provider keys stay off the client and usage stays under control.
Private beta access is rolling out to teams building client-side AI features.
- Chat, Responses + Images (OpenAI-compatible)
- Auth, quotas + attribution (per-user controls)
- OpenAI, Anthropic + more (provider routing)
Client integration
```swift
import Wavelike

Wavelike.set(environment: .production)
Wavelike.set(appId: "app_live_...")
Wavelike.set(auth: .backend(userId: user.id, key: userKey))

let model = Wavelike.model(
    for: ModelIdentifier(ModelClient.self, id: "support-chat"))

let response = try await model.sendText(history: [
    Message(role: .user, content: "Draft a weekly reply")])
```

- Responses: streaming ready
- Quotas: per user
- Models: multi-provider
Wavelike routes requests to the providers and API surfaces your apps already use.
Built for developers
The missing control plane between your app and every model.
Wavelike sits between your app and model providers with the security, spend controls, and operational visibility client-side AI needs in production.
Gateway
- Expose one hosted API for client AI features while Wavelike handles provider credentials, request validation, model resolution, and normalized responses.
Identity
- Gate requests with dashboard-managed auth strategies: API keys, backend-issued user keys, OAuth / SSO, Apple DeviceCheck, or Google Play Integrity.
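As a rough sketch of how a client might attach one of these strategies to a gateway request (the type names and the `X-User-Id` header are illustrative assumptions, not the SDK's actual API):

```swift
// Illustrative only: models the auth strategies named above as a client-side
// enum and builds the headers a gateway request might carry.
enum AuthStrategy {
    case apiKey(String)
    case backendUserKey(userId: String, key: String)
    case oauth(bearerToken: String)
}

func headers(for strategy: AuthStrategy) -> [String: String] {
    switch strategy {
    case .apiKey(let key):
        return ["Authorization": "Bearer \(key)"]
    case .backendUserKey(let userId, let key):
        // A backend-issued key scoped to one app user lets the gateway
        // attribute spend and enforce quotas per user.
        return ["Authorization": "Bearer \(key)", "X-User-Id": userId]
    case .oauth(let token):
        return ["Authorization": "Bearer \(token)"]
    }
}
```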
Spend Intelligence
- Track cost, tokens, requests, trends, and composition by app, user, stack, and model so product teams can see what is driving spend.
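Conceptually, attribution means grouping recorded usage along a dimension such as the user. A minimal sketch, assuming a simple usage-event shape (not Wavelike's actual schema):

```swift
// Illustrative usage event; real events would also carry app, stack,
// and request metadata for the other attribution dimensions.
struct UsageEvent {
    let userId: String
    let model: String
    let tokens: Int
    let costUSD: Double
}

// Roll up spend per user; the same fold works for any grouping key.
func costByUser(_ events: [UsageEvent]) -> [String: Double] {
    events.reduce(into: [:]) { totals, event in
        totals[event.userId, default: 0] += event.costUSD
    }
}
```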
Quota Policies
- Layer policies for cost, tokens, requests, or weighted request units across all users, auth strategies, API keys, individual users, or model stacks.
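The layering can be pictured as a set of independent caps that must all have headroom for a request to pass. A hedged sketch under assumed names (not the dashboard's actual policy model):

```swift
// Each policy caps one metric (cost, tokens, requests, or weighted request
// units) for some scope: all users, an auth strategy, an API key, one user,
// or a model stack.
struct QuotaPolicy {
    let scope: String
    let limit: Double
    var used: Double = 0

    func allows(_ amount: Double) -> Bool {
        used + amount <= limit
    }
}

// A request passes only if every policy that matches it still has headroom.
func requestAllowed(weightedUnits: Double, matching policies: [QuotaPolicy]) -> Bool {
    policies.allSatisfy { $0.allows(weightedUnits) }
}
```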
Model Stacks
- Configure named stacks, pinned versions, system behavior, slot roles, and provider fallbacks from the dashboard without shipping a client update.
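One way to picture a stack with provider fallbacks: an ordered list of slots tried until one's provider is available. The types below are an illustrative sketch, not the dashboard's configuration schema:

```swift
// A slot pins a provider and model version; slots are ordered
// primary-first, fallbacks after.
struct StackSlot {
    let provider: String
    let model: String
}

struct ModelStack {
    let name: String
    let slots: [StackSlot]
}

// Resolve a stack to the first slot whose provider is currently available.
func resolve(_ stack: ModelStack, available: (String) -> Bool) -> StackSlot? {
    stack.slots.first { available($0.provider) }
}
```

Because resolution happens server-side against the named stack, swapping the pinned model or fallback order never requires a client update.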
SDKs
- Use the iOS SDK today and the OpenAI-compatible HTTP endpoints for web or backend calls, with JS and Android subtree mirrors planned as they mature.
Every request passes through policy
Requests pass through strategy validation, app-user resolution, model resolution, quota checks, provider selection, error normalization, and usage recording before the response returns to the client.
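The pipeline above can be sketched as an ordered chain of stages, each of which may reject the request before a provider is ever called. The stage names follow the text; the types are illustrative:

```swift
enum GatewayError: Error {
    case rejected(stage: String)
}

struct Stage {
    let name: String
    let check: () -> Bool
}

// Run each stage in order; the first failing stage short-circuits the
// request with a normalized error, otherwise the response is returned.
func run(_ stages: [Stage]) throws -> String {
    for stage in stages where !stage.check() {
        throw GatewayError.rejected(stage: stage.name)
    }
    return "response"
}

let pipeline = [
    Stage(name: "strategy-validation", check: { true }),
    Stage(name: "app-user-resolution", check: { true }),
    Stage(name: "model-resolution", check: { true }),
    Stage(name: "quota-check", check: { true })
]
```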
Integrate without a backend rewrite
The API exposes OpenAI-style chat completions, Responses, images, model listing, and Anthropic messages routes, so teams can integrate Wavelike without rebuilding every client call.
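For example, a direct HTTP call from Swift might look like the following. The base URL and key placeholder are assumptions; the JSON body follows the standard OpenAI chat completions shape, which is why existing client code needs little change:

```swift
import Foundation

// Illustrative request to the gateway's OpenAI-style chat completions route.
let userKey = "uk_live_..."  // backend-issued key for the signed-in user
var request = URLRequest(
    url: URL(string: "https://api.wavelike.example/v1/chat/completions")!)
request.httpMethod = "POST"
request.setValue("Bearer \(userKey)", forHTTPHeaderField: "Authorization")
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try? JSONSerialization.data(withJSONObject: [
    "model": "support-chat",  // resolved server-side to a configured model stack
    "messages": [["role": "user", "content": "Draft a weekly reply"]]
])
// Send with URLSession.shared.data(for: request) and decode the
// OpenAI-shaped response as usual.
```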
Private beta access
Join the private beta for controlled AI in client apps.
Get early access to the dashboard, API gateway, iOS SDK, auth strategies, quota policies, and per-user spend analytics.
Join the waitlist and we'll follow up with private beta access.