GSoC 2026: Project 13 - No-Code AI Workflow Automation with n8n and OpenVINO Model Server for GPU/NPU #34422
Replies: 7 comments 4 replies
-
Hi Nand Kishore, Can you please prioritize a full agentic workflow using n8n + OVMS locally, with a model that supports agent-style interactions and function/tool calling? I think that would align strongly with the intended direction of the project. On your architecture questions: the Gateway approach seems reasonable for now, as long as it continues to work well for agentic/tool-calling workflows. As part of the next steps, it would be great if you could implement:
Please let me know if you have any further questions. Thanks
-
Hi @praveenkk123, What's working: a custom n8n node that connects to OpenVINO Model Server.
Repo: https://github.com/Nandkishore-04/n8n-openvino I'd appreciate any feedback on this, and would love your guidance on shaping a strong GSoC proposal for this project.
-
Hi Nand Kishore, Thanks
-
Let's keep your existing custom agent demo, as it's interesting. Please see if you can create a use case that combines both of these together in a single flow. If that is not feasible, we can create a new workflow with an AI agent. Thanks
-
Hi @praveenkk123, I kept the existing custom agent demo as is and created a combined workflow that uses both together in a single flow. Combined Workflow (/support-v2): the custom OVMS node runs DistilBERT sentiment analysis (fast, deterministic).
So both the custom node and n8n's built-in AI Agent work together: the custom node handles classification, while the AI Agent handles reasoning and tool orchestration. One limitation: Qwen2.5-1.5B is too small to reliably call tools through the built-in AI Agent; it tends to generate text describing the tool calls instead of executing them. The custom agent with the nudge system handles this correctly. I've documented this with a comparison table in the repo. A larger model (7B+) would resolve this. Everything is pushed with screenshots and docs: https://github.com/Nandkishore-04/n8n-openvino (please refer to the combined-workflow demo screenshots). Let me know if you'd like me to take a different approach.
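The "nudge" behavior described above could be sketched roughly like this. This is a hypothetical illustration, not the repo's actual implementation: it scans the model's free-text output for an embedded JSON tool call (which small models often produce instead of a structured call), and builds a stricter re-prompt when needed.

```python
# Hedged sketch of a tool-call "nudge" loop for small models.
# The JSON shape {"tool": ..., "args": ...} and the retry prompt
# wording are assumptions for illustration only.
import json


def extract_tool_call(model_output: str):
    """Return a parsed {"tool": ..., ...} dict if the model embedded one
    anywhere in its text output, else None."""
    dec = json.JSONDecoder()
    for i, ch in enumerate(model_output):
        if ch != "{":
            continue
        try:
            obj, _ = dec.raw_decode(model_output, i)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and "tool" in obj:
            return obj
    return None


def nudge_prompt(original_prompt: str) -> str:
    # Re-ask the model, insisting on JSON-only output so the agent
    # can actually execute the call instead of reading prose about it.
    return (
        original_prompt
        + '\nRespond ONLY with a JSON object of the form '
        '{"tool": "<name>", "args": {...}} and nothing else.'
    )
```

A caller would run `extract_tool_call` on each model turn and, when it returns `None` but a tool seems required, retry once with `nudge_prompt` before giving up.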
-
Hi Nand Kishore, This looks great. Let me get back to you on the next steps.
-
Hi @praveenkk123, Thanks
-
Hi @praveenkk123, @mjdomeik
I'm Nand Kishore, a final-year student pursuing a BTech in CSE (AI and Data Science), and project idea 13 really caught my attention. I've been exploring the OpenVINO GenAI codebase and currently have two active PRs under review. I have also recently been building n8n workflows for businesses to reduce manual work.
My understanding of the project: build n8n community nodes that let non-technical users run AI inference on Intel hardware (CPU/GPU/NPU) through OVMS: selecting models, choosing devices, and running inference all from the n8n UI.
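For context on the device-selection part, OVMS lets you pin each served model to a device via `target_device` in the server's `config.json`; the model name and path below are placeholders:

```json
{
  "model_config_list": [
    {
      "config": {
        "name": "distilbert",
        "base_path": "/models/distilbert",
        "target_device": "NPU"
      }
    }
  ]
}
```

An n8n node could surface this choice (CPU/GPU/NPU/AUTO) as a dropdown, mapping the selection onto the served model variants.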
I built a working prototype to explore the integration approach.
ARCHITECTURE:
n8n (custom node) → Gateway (tokenizes text) → OVMS (runs inference) → Results back to n8n
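The Gateway step in the diagram above could be sketched as follows. This is a minimal illustration assuming OVMS exposes the KServe v2 REST API (`POST /v2/models/<name>/infer`); the model name, port, and the toy whitespace tokenizer are placeholders (a real gateway would use the model's own tokenizer, e.g. from Hugging Face `transformers`).

```python
# Hypothetical sketch of the Gateway: tokenize text, build a KServe v2
# inference request, and forward it to OVMS over REST.
import json
from urllib import request


def toy_tokenize(text: str) -> list[int]:
    # Placeholder tokenizer: maps each word to a fake vocabulary id.
    # Stand-in only; real token ids must come from the model's tokenizer.
    return [hash(w) % 30522 for w in text.lower().split()]


def build_infer_payload(text: str) -> dict:
    """Build a KServe v2 request body with input_ids and attention_mask."""
    ids = toy_tokenize(text)
    return {
        "inputs": [
            {"name": "input_ids", "shape": [1, len(ids)],
             "datatype": "INT64", "data": ids},
            {"name": "attention_mask", "shape": [1, len(ids)],
             "datatype": "INT64", "data": [1] * len(ids)},
        ]
    }


def infer(text: str, base_url: str = "http://localhost:9000") -> dict:
    # Forward the tokenized request to OVMS and return the parsed response,
    # which the n8n node would then map back into workflow items.
    body = json.dumps(build_infer_payload(text)).encode()
    req = request.Request(f"{base_url}/v2/models/distilbert/infer",
                         data=body,
                         headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```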
IMPLEMENTATION DETAILS:
Repo: https://github.com/Nandkishore-04/n8n-openvino
OpenVINO GenAI Contributions
#3337 — Node.js N-API bindings for Text2VideoPipeline
#3155 — Introduced C API for Text-to-Speech pipeline and SpeechT5 (Fixes #2302)
Looking for Feedback On:
I’d appreciate feedback on whether this architecture aligns with the intended scope and where you see the highest priority.
Regards,
Nand Kishore