We handle Docker containers, security, and uptime. You bring your API key and connect your messaging app. Live in under a minute.
Your own isolated Docker instance running 24/7. Not shared with anyone.
Claude, GPT, Gemini — use whatever provider you want with your own API key.
Connect to the messaging platforms you already use. More channels coming.
Molt remembers context across conversations. Storage persists across restarts.
If your instance crashes, it comes back automatically. No manual intervention.
Non-root execution, capability drops, network isolation, resource limits. By default.
Choose Claude, GPT, or Gemini. Paste your API key from the provider's dashboard. You pay them directly for usage.
Create a Telegram or Discord bot — we walk you through every step inline. Paste the bot token.
We spin up a dedicated Molt container with your configuration. Your bot comes online in seconds. Message it and start using it.
< 1 min
Pick model, connect channel, deploy.
No terminal. No Docker. No maintenance.
Competitors charge $30–40/mo
+ your AI provider API costs
Less than a DigitalOcean droplet — without the setup.
Trusting a hosting service with your API keys is a serious decision. Here is exactly how we earn that trust.
Injected as environment variables at deploy time. Never stored in any database, never logged.
Conversations flow directly between your app, your container, and your AI provider.
Molt is fully auditable on GitHub. You are not trusting a black box.
Separate filesystem, network namespace, and process tree per user.
Non-root execution, all Linux capabilities dropped, no privileged mode.
Kernel-enforced caps on RAM, CPU, disk, and process count per container.
No. Your API key is injected as an environment variable directly into your isolated container at deploy time. It is never stored in our database, never logged, and never visible to our team.
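As an illustrative sketch of this pattern (the image name, container name, and variable are hypothetical, not Molt's actual deploy command), Docker can pass a secret into a container by environment-variable name, so the value never lands in an image layer, a stored config file, or the deploy command itself:

```shell
# Hypothetical sketch of deploy-time secret injection.
# The key is read from the operator's environment and copied into the
# container's process environment -- it is never baked into an image
# layer or written to a database row.
export ANTHROPIC_API_KEY="sk-ant-example"   # in practice, sourced from a secrets manager

docker run -d --name molt-user42 \
  -e ANTHROPIC_API_KEY \   # passing the name only: Docker copies the value from the host env
  molt:latest
```

Passing `-e ANTHROPIC_API_KEY` without an `=value` tells Docker to read the value from the host environment, which keeps the secret out of the visible command line and shell history.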
No. Messages flow directly between your messaging app, your Molt container, and your AI provider. We do not proxy, intercept, or log any message traffic.
Every container runs in its own isolated network namespace with no inter-container communication. Containers have separate filesystems, separate process trees, and separate network stacks.
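A minimal sketch of that setup, using hypothetical network and image names: each user gets a dedicated bridge network, and containers attached to different bridge networks cannot reach each other at all.

```shell
# Hypothetical per-user network isolation (names are illustrative).
docker network create \
  -o com.docker.network.bridge.enable_icc=false \   # also block traffic between containers on this network
  molt-net-user42

docker run -d --name molt-user42 \
  --network molt-net-user42 \   # this container sees only its own network namespace
  molt:latest
```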
Each container has kernel-enforced limits on RAM, CPU, process count, and disk space. These are hard caps, not soft limits.
Containers run with all unnecessary Linux capabilities dropped, no privileged mode, no host network access, and no Docker socket exposure. This is the same container isolation model used by AWS and GCP.
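The capability, privilege, and resource controls described in the two answers above map onto standard Docker flags. A hypothetical hardened launch (the values are illustrative, not Molt's actual limits) might look like:

```shell
# Hypothetical hardened container launch; flag values are illustrative.
# Note what is absent: no --privileged, no --network host,
# and no -v /var/run/docker.sock mount.
docker run -d --name molt-user42 \
  --user 1000:1000 \                   # non-root execution
  --cap-drop ALL \                     # drop all Linux capabilities
  --security-opt no-new-privileges \   # block privilege escalation via setuid binaries
  --memory 512m --cpus 1.0 \           # kernel-enforced RAM and CPU caps
  --pids-limit 256 \                   # hard cap on process count
  molt:latest
```

Disk caps are typically enforced separately, for example via `--storage-opt size=2G` on a supporting storage driver or via volume quotas.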
Your container runs until the end of your billing cycle. After that, the container and all storage volumes are permanently destroyed within 48 hours.
Yes. Molt is open source and fully auditable on GitHub. The code your assistant runs is transparent.
Molt is an open-source AI agent framework. It connects AI models like Claude and GPT to messaging apps like Telegram, Discord, and WhatsApp — creating a persistent assistant that can manage tasks, browse the web, handle files, and automate workflows through natural conversation.
Molt uses an AI model to think and respond. By using your own API key, you pay the AI provider directly for only what you use — and your conversations never pass through us. Typical usage costs $10–30/month depending on how active you are.
The hosting infrastructure: a dedicated Docker container running 24/7, persistent storage for Molt's memory and workspace, automatic restarts, and security hardening. AI model costs are separate and paid directly to your provider.
We handle server provisioning, Docker configuration, security hardening, uptime monitoring, and automatic restarts. You skip the entire DevOps process and go straight to using your assistant.
Yes. You can update your model provider or API key from the dashboard at any time. Your assistant's memory and conversation history are preserved.
Yes. No contracts, no cancellation fees. Cancel from your dashboard and your instance runs until the end of your billing period.
Your Molt instance is one minute away.
Get Started — $19/mo