Trusted and used by teams around the globe
Iterate, fast
Developing LLM apps takes countless iterations. With a low-code approach, we enable quick iterations to go from testing to production
- $ npm install -g flowise
- $ npx flowise start
LLM Orchestration
Connect LLMs with memory, data loaders, caching, moderation and many more
- Langchain
- LlamaIndex
- 100+ integrations
Agents & Assistants
Create autonomous agents that can use tools to execute different tasks
- Custom Tools
- OpenAI Assistant
- Function Agent
- import requests
- url = "/api/v1/prediction/:id"
- def query(payload):
-     response = requests.post(url, json=payload)
-     return response.json()
- output = query({"question": "hello!"})
API, SDK, Embed
Extend and integrate into your applications using APIs, SDKs and Embedded Chat
- APIs
- Embedded Widget
- React SDK
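Building on the prediction endpoint shown above, the API can be wrapped in a small reusable client. This is a minimal sketch, assuming a self-hosted Flowise instance; the class and parameter names here are illustrative, not part of any official SDK.

```python
import requests

class FlowiseClient:
    """Illustrative wrapper around the /api/v1/prediction endpoint."""

    def __init__(self, base_url, chatflow_id):
        # e.g. base_url="http://localhost:3000" for a local instance;
        # chatflow_id identifies the flow to query
        self.url = f"{base_url}/api/v1/prediction/{chatflow_id}"

    def predict(self, question):
        # POST the question as JSON and return the parsed response
        response = requests.post(self.url, json={"question": question})
        response.raise_for_status()
        return response.json()
```

A caller would then run something like `FlowiseClient("http://localhost:3000", "<chatflow-id>").predict("hello!")`.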
Open source LLMs
Run in air-gapped environments with local LLMs, embeddings and vector databases
- HuggingFace, Ollama, LocalAI, Replicate
- Llama2, Mistral, Vicuna, Orca, Llava
- Self host on AWS, Azure, GCP
Use Cases
One platform, endless possibilities. See some of the use cases
Product catalog chatbot to answer any questions related to the products
Pricing
Free 14-day trial. No credit card required.
Pro
For medium-sized businesses
$65/month
- Everything in Starter
- 50,000 Predictions / month, then $0.001 per prediction
- 10GB Storage
- Unlimited Workspaces
- Admin Roles & Permissions
- 3 Months Log Retention
- Priority Support
Community 🫶
The open source community is the heart of Flowise. See why developers love and build with Flowise
Webinars
Learn how to use Flowise through our webinar series with partners
Enterprise
Looking for specific use cases and support?