Introduction
=============
## azllm
`azllm` is a Python package designed to interface with various large language models (LLMs) from different AI providers. It offers a unified interface for interacting with models from providers such as **OpenAI**, **DeepSeek**, **Grok**, **Gemini**, **Meta's Llama**, **Anthropic**, **Ollama**, and others. The package supports customizable configurations, batch generation, parallel generation, graceful error handling, and parsing of structured responses from models that support it.
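The unified-interface idea can be sketched in plain Python. The names below (`LLMClient`, `EchoClient`, `generate`) are illustrative stand-ins to show the pattern, not azllm's actual API:

```python
from typing import Protocol

class LLMClient(Protocol):
    """Minimal interface every provider client is expected to satisfy."""
    def generate(self, prompt: str, **params) -> str: ...

class EchoClient:
    """Stand-in for a real provider client (e.g. one wrapping OpenAI's SDK)."""
    def __init__(self, name: str):
        self.name = name

    def generate(self, prompt: str, **params) -> str:
        # A real client would call the provider's API here.
        return f"[{self.name}] response to: {prompt}"

# A registry maps provider names to clients, so callers use one entry point
# regardless of which provider actually serves the request.
CLIENTS: dict[str, LLMClient] = {
    "openai": EchoClient("openai"),
    "anthropic": EchoClient("anthropic"),
}

def generate(provider: str, prompt: str, **params) -> str:
    return CLIENTS[provider].generate(prompt, **params)
```

With this shape, switching providers is a one-string change in the caller, which is the main ergonomic benefit a unified client aims for.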
## Features
- **Unified Client Interface**: A single interface for interacting with various language models.
- **Customizable Parameters**: Easily configure parameters like `temperature`, `max_tokens`, and more for each model.
- **Batch Generation**: Generate responses for multiple prompts in a single function call.
- **Parallel Generation**: Generate text in parallel using different clients and models for the same prompt.
- **Response Parsing**: Structured-response parsing for models that support it.
- **Error Handling**: Graceful error handling with informative messages in case of failures.
- **Lazy Client Initialization**: Clients are initialized only when needed to optimize performance.
- **Environment Configuration**: API keys and other secrets are managed via `.env` files.
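Several of the features above (lazy client initialization, batch generation, parallel generation) are generic patterns that can be sketched with stand-in clients. The class and function names here are illustrative only, not azllm's API:

```python
from concurrent.futures import ThreadPoolExecutor

class FakeClient:
    """Stand-in for a provider client; a real one would call an HTTP API."""
    def __init__(self, model: str):
        self.model = model

    def generate(self, prompt: str) -> str:
        return f"{self.model}: {prompt}"

class LazyRegistry:
    """Lazy initialization: a client is constructed only on first use."""
    def __init__(self):
        self._clients: dict[str, FakeClient] = {}

    def get(self, model: str) -> FakeClient:
        if model not in self._clients:  # build only when first requested
            self._clients[model] = FakeClient(model)
        return self._clients[model]

registry = LazyRegistry()

def generate_batch(model: str, prompts: list[str]) -> list[str]:
    """Batch generation: one call produces a response per prompt."""
    client = registry.get(model)
    return [client.generate(p) for p in prompts]

def generate_parallel(models: list[str], prompt: str) -> list[str]:
    """Parallel generation: the same prompt sent to several models concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda m: registry.get(m).generate(prompt), models))
```

Threads suit this workload because LLM calls are I/O-bound; `pool.map` also preserves the order of `models` in the returned list.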
## Supported Clients
- OpenAI
- DeepSeek
- Grok
- Anthropic
- Fireworks (for Meta's Llama and other open models)
- Google's Gemini
- Ollama
**NOTE:** If you would like to request support for additional LLMs, please open a new issue on our GitHub page.