AI Workers

Llama 3.1 8B Instruct (Chat)

Chat/text generation via the Linconwaves AI Workers API.

Llama 3.1 8B Instruct (slug: llama-3p1-8b) is a chat model for assistants, reasoning, and content generation.

Endpoint

  • Base URL: https://aiworker.linconwaves.com
  • URL: POST /:modelSlug with modelSlug = llama-3p1-8b
  • Auth: Authorization: Bearer <api_key>
  • Content-Type: application/json
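
Putting the pieces together: the request target is the base URL with the model slug appended. A small sketch, assuming nothing beyond the values listed above:

const BASE = 'https://aiworker.linconwaves.com';
const MODEL_SLUG = 'llama-3p1-8b';
const endpoint = `${BASE}/${MODEL_SLUG}`; // https://aiworker.linconwaves.com/llama-3p1-8b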

Request

OpenAI-style chat payload.

{
  "messages": [
    { "role": "user", "content": "Summarize edge AI in one paragraph." }
  ]
}

Notes:

  • Roles: user and assistant. Keep prompts concise for better latency.
  • Include minimal prior history when you need multi-turn context; large histories may be trimmed upstream (see the sketch below).
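
For multi-turn chat, send prior turns in order and put the newest user message last. A minimal sketch (the assistant turn here is illustrative, not a real model output):

// Multi-turn payload: prior turns first, newest user message last.
// Keep the history short; older turns may be trimmed upstream anyway.
const payload = {
  messages: [
    { role: 'user', content: 'Summarize edge AI in one paragraph.' },
    { role: 'assistant', content: 'Edge AI runs models close to where data is produced, ...' },
    { role: 'user', content: 'Now compress that to a single sentence.' },
  ],
};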

Response

  • Typical: { "response": "<text>" } or OpenAI-style { "choices": [{ "message": { "content": "<text>" } }] }.
  • To extract text, prefer choices[i].message.content and fall back to response or text (see the sketch below).
  • Errors return JSON { "error": "...", "detail"?: "..." } with HTTP codes 400/401/499/500.
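
Because the shape can vary, it helps to model both variants before extracting text. A hedged TypeScript sketch of the shapes described above (no fields beyond those listed are assumed):

// Success and error shapes as documented above; all success fields are optional
// because the response may come in either variant.
type ChatSuccess = {
  response?: string;
  text?: string;
  choices?: { message?: { content?: string } }[];
};

type ChatError = { error: string; detail?: string };

// Prefer choices[i].message.content, then fall back to response or text.
function extractText(data: ChatSuccess): string | undefined {
  return (
    data.choices?.map((c) => c.message?.content).find((t) => typeof t === 'string') ??
    data.response ??
    data.text
  );
}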

Curl example

curl -X POST https://aiworker.linconwaves.com/llama-3p1-8b \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  --data '{"messages": [{"role": "user", "content": "Summarize edge AI in one paragraph."}]}'

JavaScript (fetch)

const resp = await fetch('https://aiworker.linconwaves.com/llama-3p1-8b', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.API_KEY}`,
  },
  body: JSON.stringify({
    messages: [
      { role: 'user', content: 'Summarize edge AI in one paragraph.' },
    ],
  }),
});

const data = await resp.json();
if (!resp.ok) {
  throw new Error(data.error || `Request failed (${resp.status})`);
}

const text =
  (Array.isArray(data.choices)
    ? data.choices.map((c) => c?.message?.content).find((t) => typeof t === 'string')
    : undefined) ||
  data.response ||
  data.text ||
  'No response text found.';

Error codes

  • 401 Unauthorized — Missing/invalid API key.
  • 400 Bad Request — Invalid payload (e.g., missing messages).
  • 499 Client Closed Request — Request aborted by client.
  • 500 Internal Server Error — Upstream model error or unexpected failure.
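
How a client reacts to each code is up to the caller; the sketch below maps them to basic handling (treating 500 as potentially transient is an assumption, not a documented guarantee):

// Sketch: map the documented status codes to client-side handling.
async function callLlama(payload: unknown, apiKey: string) {
  const res = await fetch('https://aiworker.linconwaves.com/llama-3p1-8b', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  });

  if (res.ok) return res.json();

  // Error bodies are JSON: { "error": "...", "detail"?: "..." }.
  const err = await res.json().catch(() => ({ error: res.statusText }));
  switch (res.status) {
    case 400:
      throw new Error(`Bad request: ${err.error}`); // fix the payload before retrying
    case 401:
      throw new Error('Unauthorized: check the API key');
    case 499:
      throw new Error('Request was aborted by the client');
    default:
      throw new Error(err.error || 'Upstream model error'); // 500: may be transient
  }
}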

Backend snippets

const BASE = 'https://aiworker.linconwaves.com';
const API_KEY = process.env.AIWORKER_API_KEY!;

const payload = {
  messages: [
    { role: 'user', content: 'Summarize edge AI in one paragraph.' },
  ],
};

const res = await fetch(`${BASE}/llama-3p1-8b`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${API_KEY}`,
  },
  body: JSON.stringify(payload),
});
const data = await res.json();
console.log(data);

Frontend snippets

// app/api/aiworker/route.ts: server route that proxies to AI Workers so the API key stays out of the browser
const BASE = 'https://aiworker.linconwaves.com';
const API_KEY = process.env.AIWORKER_API_KEY!;

export async function POST(req: Request) {
  const payload = await req.json();
  const res = await fetch(`${BASE}/llama-3p1-8b`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify(payload),
  });
  // Forward the status and content type; copying upstream headers wholesale can
  // break the proxied response (e.g., stale content-encoding/content-length).
  return new Response(await res.text(), {
    status: res.status,
    headers: { 'Content-Type': res.headers.get('Content-Type') ?? 'application/json' },
  });
}
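
In the browser, call the proxy route rather than the AI Workers API directly so the key never leaves the server. A sketch using the /api/aiworker path implied by the route file above:

// Browser-side call to the proxy route; no API key in client code.
const res = await fetch('/api/aiworker', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Summarize edge AI in one paragraph.' }],
  }),
});
const data = await res.json();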

Playground

  • Dashboard → Playground → select “Llama 3.1 8B Instruct” and chat.