With the Chat API’s `memory` option, the SDK handles context retrieval, prompt construction, and the LLM call in a single step, so there is no need to build a RAG pipeline yourself.
```typescript
import { NdxClient } from '@neuradex/sdk';

const client = new NdxClient({
  apiKey: process.env.NEURADEX_API_KEY,
  projectId: process.env.NEURADEX_PROJECT_ID,
});

async function ragAnswer(question: string): Promise<string> {
  const stream = client.chat.create({
    model: 'gpt-4o',
    messages: [
      {
        role: 'system',
        content:
          'Answer the question based on the following information. If information is insufficient, say so honestly.',
      },
      { role: 'user', content: question },
    ],
    memory: {
      enabled: true,
      maxTokens: 3000,
      includeEpisodes: true,
    },
  });
  return await stream.text;
}
```
The Chat API internally calls the Memory API’s `getContext()` and automatically injects the result into the system message. The code above achieves the same result as the manual RAG pipeline below.
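To make that injection step concrete, it can be sketched as a pure function. This is an illustration only: `buildSystemMessage` and the `RetrievedContext` interface are hypothetical names, not part of the SDK; the actual internal formatting may differ.

```typescript
// Hypothetical shape of a retrieved-context result; only the `formatted`
// field is assumed here, mirroring `context.formatted` in the manual
// pipeline below.
interface RetrievedContext {
  formatted: string;
}

// Sketch of what the Chat API does internally when `memory.enabled` is
// true: append the retrieved context to the caller's system prompt,
// separated by a blank line.
function buildSystemMessage(
  basePrompt: string,
  context: RetrievedContext,
): string {
  return `${basePrompt}\n\n${context.formatted}`;
}

// Example: the system message the model actually receives.
const injected = buildSystemMessage(
  'Answer the question based on the following information.',
  { formatted: '- NeuraDex stores project memory as a graph.' },
);
```

The user message is passed through unchanged; only the system message is augmented, which is why the manual pipeline below interpolates `context.formatted` into its system prompt.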
A typical pattern for implementing RAG (Retrieval-Augmented Generation) with an external LLM: the Memory API retrieves context, and the Episodes API records the Q&A history.
```typescript
import { NdxClient } from '@neuradex/sdk';
import OpenAI from 'openai';

const neuradex = new NdxClient({
  apiKey: process.env.NEURADEX_API_KEY,
  projectId: process.env.NEURADEX_PROJECT_ID,
});
const openai = new OpenAI();

async function ragAnswer(question: string): Promise<string> {
  // 1. Get context
  const context = await neuradex.memory.getContext(question, {
    tokenBudget: 3000,
    includeEpisodes: true,
    maxDepth: 2,
  });

  // 2. Generate answer with LLM
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      {
        role: 'system',
        content: `Answer the question based on the following information. If information is insufficient, say so honestly.

${context.formatted}`,
      },
      { role: 'user', content: question },
    ],
  });
  const answer = response.choices[0].message.content ?? '';

  // 3. Record the Q&A as episodes (for learning)
  const questionEpisode = await neuradex.episodes.create({
    actorType: 'user',
    episodeType: 'question',
    content: question,
    scopeType: 'project',
    scopeId: process.env.NEURADEX_PROJECT_ID,
    channel: 'api',
  });
  await neuradex.episodes.create({
    actorType: 'agent',
    episodeType: 'answer',
    content: answer,
    scopeType: 'project',
    scopeId: process.env.NEURADEX_PROJECT_ID,
    channel: 'api',
    parentEpisodeId: questionEpisode.id,
  });

  return answer;
}
```