Examples

Practical code examples and integration guides for Inferno AI across popular languages and frameworks.

Code Examples

Python Integration

Complete Python examples using the OpenAI SDK and native HTTP requests.

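As a sketch of the plain-HTTP path (assuming only the OpenAI-compatible `/v1/chat/completions` endpoint shown in the Quick Examples below; the `build_payload` and `chat` helper names are illustrative, not part of any SDK):

```python
import json
import urllib.request

def build_payload(prompt, model="llama-2-7b-chat"):
    """Build an OpenAI-style chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt, base_url="http://localhost:8080/v1", model="llama-2-7b-chat"):
    """POST a chat completion using only the Python standard library."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# chat("Hello!")  # requires a running Inferno server on localhost:8080
```

The SDK-based version of the same call appears under Quick Examples below.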
REST API Usage

Direct REST API examples with curl, httpie, and Postman.

Streaming Responses

Implement real-time streaming in multiple languages.

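One way to sketch the consuming side in Python: OpenAI-style streaming endpoints emit server-sent events (`data: {...}` lines, terminated by `data: [DONE]`), and each chunk carries a text fragment in `choices[0].delta.content`. The parser below runs against canned events; in a live integration the lines would come from the HTTP response body.

```python
import json

def iter_deltas(lines):
    """Yield text fragments from SSE lines of a streaming chat completion."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank lines
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {}).get("content")
        if delta:
            yield delta

# Canned events standing in for a live response stream:
events = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
print("".join(iter_deltas(events)))  # prints Hello
```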
Docker Deployment

Complete Docker examples for development and production.

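A minimal `docker-compose.yml` sketch for local development. The image name, port, and volume path here are assumptions for illustration; substitute the values from your own build or the published Inferno image.

```yaml
services:
  inferno:
    image: inferno/inferno:latest   # assumption: replace with the actual image name
    ports:
      - "8080:8080"                 # matches the base URL used in the examples below
    volumes:
      - ./models:/models            # assumption: mount local model files into the container
```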

By Language

Python

JavaScript/TypeScript

Rust

Other Languages


By Use Case

Chat Applications

Code Generation

Data Processing


Integration Examples

Web Frameworks

CLI Applications


Quick Examples

Python - Chat Completion

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama-2-7b-chat",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```

JavaScript - Streaming

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:8080/v1',
  apiKey: 'not-needed'
});

const stream = await client.chat.completions.create({
  model: 'llama-2-7b-chat',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

Curl - REST API

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-2-7b-chat",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Next Steps