
LLM Node

The LLM node calls AI language models to analyze data, generate insights, and make decisions within your workflow. It supports 9 providers and 30+ models — from fast summarizers to deep reasoning engines.

Configuration

Model: Select from the dropdown, grouped by provider. Default: Claude Sonnet 4.6.
API Credentials: NickAI Credits (default; works with all models) or your own API key for the selected provider.
System Prompt: Defines the AI's role and behavior. Shapes how the model responds to every request.
User Prompt: The main instruction for this execution. Use {{edge_label.field}} to inject data from connected nodes.
Temperature: Controls randomness. 0 = deterministic, 0.7 = balanced (default), 2.0 = maximum creativity.
Max Tokens: Maximum response length. Default: 4000. Range: 1–8192.
Timeout: Maximum wait time in seconds. Default: 60. Range: 1–300.
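
Taken together, a node using the defaults above might serialize to something like this (a hypothetical sketch for orientation only; the field names are illustrative, not the product's actual export format):

```json
{
  "model": "Claude Sonnet 4.6",
  "credentials": "NickAI Credits",
  "systemPrompt": "You are a crypto market analyst.",
  "userPrompt": "Analyze BTC/USD. Current price: {{price_data.data.prices[0].current}}",
  "temperature": 0.7,
  "maxTokens": 4000,
  "timeout": 60
}
```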

Available Models

Anthropic
  Claude Sonnet 4.6 (default): Complex analysis, structured output
  Claude Opus 4.6: Hardest tasks, 1M context
  Claude Sonnet 4.5: Flagship general-purpose
  Claude Opus 4.5: Deep complex reasoning
  Claude Sonnet 4: Extended thinking / reasoning
  Claude Haiku 4.5: Fast and cost-effective

OpenAI
  GPT-5.2: Latest flagship
  GPT-5: Complex analysis
  GPT-5 Mini: Efficient general-purpose
  GPT-4o: Multimodal / chart analysis
  GPT-4o Mini: Fast, low-cost

Google
  Gemini 3 Pro: Flagship multimodal
  Gemini 2.5 Flash: Fast multimodal / vision
  Gemini 2.5 Flash Lite: Ultra-fast inference
  Gemini 2.5 Pro: Advanced reasoning

xAI
  Grok 4: Flagship
  Grok 4 Fast: Ultra-fast
  Grok 3: General-purpose
  Grok 3 Mini: Lightweight
  Grok Code Fast: Code generation

DeepSeek
  DeepSeek Chat: Conversational
  DeepSeek Reasoner: Deep reasoning

Qwen
  Qwen 2.5 72B: Large-scale analysis
  Qwen Coder 32B: Code generation

Perplexity
  Sonar Pro: Research with web search
  Sonar Reasoning: Deep reasoning + search
  Sonar: Fast search with citations

Kimi
  Kimi K2.5: Visual coding, multimodal
  Kimi K2 Thinking: Long-horizon reasoning
  Kimi K2: General-purpose

MiniMax
  MiniMax M2.5: Real-world productivity
  MiniMax M2.1: Coding, agentic workflows
  MiniMax M2: Compact, high-efficiency

Prompt Interpolation

Use double curly braces to inject live data from upstream nodes into your prompts.

{{price_data.data.prices[0].current}}: Current price from a Price Data node
{{price_data.data.prices[0].indicators.rsi}}: RSI value from the same Price Data node
{{portfolio.positions}}: Full positions array from a Portfolio node
{{my_function.signal}}: A specific field from a Function node
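
Conceptually, each expression is a path lookup into the upstream node's JSON output. A minimal sketch of the resolution logic (illustrative only; the real engine is internal to the workflow runtime):

```python
import re

def interpolate(template: str, context: dict) -> str:
    """Replace {{edge_label.path[0].field}} expressions with values from context."""
    def resolve(match):
        path = match.group(1)
        # Split "price_data.data.prices[0].current" into keys and [index] parts
        parts = re.findall(r"[^.\[\]]+|\[\d+\]", path)
        value = context
        for part in parts:
            if part.startswith("["):
                value = value[int(part[1:-1])]   # list index
            else:
                value = value[part]              # dict key
        return str(value)
    return re.sub(r"\{\{(.+?)\}\}", resolve, template)

context = {"price_data": {"data": {"prices": [{"current": 67250.5,
                                               "indicators": {"rsi": 28.4}}]}}}
print(interpolate("Price: {{price_data.data.prices[0].current}}, "
                  "RSI: {{price_data.data.prices[0].indicators.rsi}}", context))
# Price: 67250.5, RSI: 28.4
```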

Example: Market Analysis Prompt

System Prompt:

You are a crypto market analyst. Analyze the provided price data
and technical indicators.

Respond in this exact format:
ACTION: [BUY / SELL / HOLD]
CONFIDENCE: [0-100]%
RATIONALE: [2-3 sentence explanation]

Be conservative — only recommend BUY when multiple indicators align.

User Prompt:

Analyze BTC/USD right now.

Current price: {{price_data.data.prices[0].current}}
24h change: {{price_data.data.prices[0].changePercent24h}}%
RSI: {{price_data.data.prices[0].indicators.rsi}}

Based on these indicators, what is your recommendation?

Structured Output

Toggle Structured Output to force the model to return JSON matching a specific schema instead of free-form text. This is useful when you need to feed parsed data directly into Conditional or Function nodes without extra parsing.

Define fields with a name, type (string, number, boolean, array, or object), and whether they're required. Click the type badge to change it. Expand objects and arrays to add nested properties.

With structured output enabled, the model returns JSON matching your schema, and temperature is forced to 0. The demo's trading signal schema produces output like:

{
  "signal": "buy",
  "confidence": 0.85,
  "reasoning": "RSI below 30 indicates oversold"
}

The demo above is pre-populated with a trading signal schema. In your workflow, the LLM will return JSON matching your schema exactly — you can then route it directly into a Conditional node (e.g., check if signal equals "buy" and confidence is greater than 0.7).
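
The trading-signal fields in the demo correspond roughly to a JSON Schema like the following (a sketch; the builder generates the actual schema from the fields you define):

```json
{
  "type": "object",
  "properties": {
    "signal": { "type": "string" },
    "confidence": { "type": "number" },
    "reasoning": { "type": "string" }
  },
  "required": ["signal", "confidence", "reasoning"]
}
```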


Visual Analysis

Connect a Chart Image node to the LLM to enable visual chart analysis. Vision-capable models (GPT-4o, Gemini, Kimi K2.5) can analyze candlestick patterns, support/resistance levels, and trend direction directly from the chart image.

Example workflow: BTC Chart (BINANCE:BTCUSDT) → Analyze Chart (GPT-4o) → Buy Signal? (signal = buy) → Place Order (Buy BTC)

The LLM automatically detects images in the interpolated data — no special configuration needed.


Parsing LLM Output Downstream

The LLM returns a text string by default. To use it in decisions:

  • Simple routing: Connect LLM → Conditional. Set Field to llm.output, Operator to "Contains", Value to BUY. The true branch triggers the trade, false branch sends a notification.

  • Structured parsing: Enable Structured Output on the LLM node itself, or connect LLM → Function node that parses the text into JSON → Conditional on the parsed fields.

  • Multi-model consensus: Run the same data through multiple LLM nodes in parallel, then merge results in a Function node to vote on the final action.
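
For the structured-parsing route, the Function node only needs a few lines to turn the ACTION / CONFIDENCE / RATIONALE format from the earlier example into a JSON-friendly object. A sketch (parse_signal is a hypothetical helper; the output keys mirror the structured-output demo):

```python
import re

def parse_signal(text: str) -> dict:
    """Parse 'ACTION: BUY / CONFIDENCE: 85% / RATIONALE: ...' into a dict."""
    action = re.search(r"ACTION:\s*(BUY|SELL|HOLD)", text)
    confidence = re.search(r"CONFIDENCE:\s*(\d+)\s*%", text)
    rationale = re.search(r"RATIONALE:\s*(.+)", text)
    return {
        "signal": action.group(1).lower() if action else "hold",   # default to hold
        "confidence": int(confidence.group(1)) / 100 if confidence else 0.0,
        "reasoning": rationale.group(1).strip() if rationale else "",
    }

result = parse_signal("ACTION: BUY\nCONFIDENCE: 85%\nRATIONALE: RSI oversold, MACD crossing up.")
print(result["signal"], result["confidence"])  # buy 0.85
```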

Example workflow: BTC Price → Claude Analysis (Claude Sonnet 4.6) → Contains BUY? (1 rule) → true: Buy BTC, false: Alert: No Signal
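
The multi-model consensus pattern reduces to a majority vote inside the merging Function node. A sketch under the assumption that each model answers in the ACTION: format from the example prompt (consensus is a hypothetical helper):

```python
from collections import Counter

def consensus(outputs):
    """Majority-vote over several LLM responses of the form 'ACTION: BUY ...'."""
    votes = []
    for text in outputs:
        for action in ("BUY", "SELL", "HOLD"):
            if f"ACTION: {action}" in text:
                votes.append(action)
                break
    if not votes:
        return "HOLD"
    winner, count = Counter(votes).most_common(1)[0]
    # Require a strict majority; otherwise stay flat
    return winner if count > len(outputs) / 2 else "HOLD"

print(consensus(["ACTION: BUY x", "ACTION: BUY y", "ACTION: HOLD z"]))  # BUY
```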

Output

{llm.output}: The model's response (a plain-text string, or a parsed JSON object when structured output is enabled)
{llm.citations}: Array of web search citations (Perplexity models only)

Next Steps