LLM applications can be difficult to debug. Inputs are large, prompts are complex, and outputs are non-deterministic. Nuabase acts as a system of record for all your AI interactions.

Request Storage

Nuabase stores the full context of every request:
  1. Input Data: The raw JSON data you passed in.
  2. Prompt: The exact prompt template used.
  3. Output: The raw LLM response and the parsed JSON result.
  4. Metadata: Token usage, latency, model version, and cost.
This means you don’t need to log huge blobs of text in your own application database.
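The four stored categories above can be pictured as one record per request. A minimal sketch of what such a record might look like, assuming illustrative field names (the actual Nuabase schema may differ):

```typescript
// Hypothetical shape of a stored request record, mirroring the four
// categories above. Field names here are illustrative, not the real API.
interface StoredRequest {
  input: unknown;          // 1. raw JSON input data
  prompt: string;          // 2. exact prompt template used
  output: {
    raw: string;           // 3. raw LLM response...
    parsed: unknown;       //    ...and the parsed JSON result
  };
  metadata: {
    tokens: number;        // 4. token usage
    latencyMs: number;     //    latency
    model: string;         //    model version
    costUsd: number;       //    cost
  };
}

// Example record for a single classification request.
const example: StoredRequest = {
  input: { orderId: 42 },
  prompt: "Classify the order described in the input JSON.",
  output: {
    raw: '{"category":"electronics"}',
    parsed: { category: "electronics" },
  },
  metadata: { tokens: 128, latencyMs: 850, model: "example-model-v1", costUsd: 0.002 },
};

console.log(example.metadata.model);
```

Because every field lives in this record on Nuabase's side, your own tables only need to reference it, not duplicate it.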

Retrieving Request Details

Every Nuabase response includes an llmRequestId. You can use this ID to fetch the full details of that execution later.
const result = await myFn(data);

if (result.isSuccess) {
  console.log("Request ID:", result.llmRequestId);
  
  // Later, or in a different admin tool:
  const details = await nua.getRequest(result.llmRequestId);
  console.log("Original Input:", details.input);
  console.log("Prompt Used:", details.prompt);
}
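Since the full context is retrievable by ID, your application database only needs to persist the llmRequestId alongside the result. A sketch of that pattern, with an in-memory map standing in for your own storage (the OrderRecord type and helper names are illustrative):

```typescript
// Your own record keeps a pointer to the Nuabase request instead of
// duplicating inputs, prompts, and raw outputs.
interface OrderRecord {
  orderId: number;
  category: string;
  llmRequestId: string; // pointer back to the full Nuabase trace
}

// In-memory stand-in for your application database.
const ordersDb = new Map<number, OrderRecord>();

function saveClassification(
  orderId: number,
  category: string,
  llmRequestId: string,
): void {
  ordersDb.set(orderId, { orderId, category, llmRequestId });
}

saveClassification(42, "electronics", "req_abc123");

// Later, debug any row by fetching the full trace from Nuabase:
// const details = await nua.getRequest(ordersDb.get(42)!.llmRequestId);
console.log(ordersDb.get(42)?.llmRequestId);
```

This keeps row sizes small while preserving a path back to the complete prompt, input, and output for any past decision.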

Usage in Development

During development, this is invaluable. You can:
  1. Run a function from your local machine.
  2. Go to the Nuabase Console.
  3. See exactly what was sent to the LLM and why it responded the way it did.

Compliance & Auditing

For enterprise use cases, having a complete audit trail of every AI decision is often a requirement. Nuabase provides this automatically without any extra engineering effort.