Workflow
Introduction
Workflow defines what an AI agent can do. It simplifies the development of AI agents by breaking down complex tasks into smaller, manageable steps (nodes). This method reduces reliance on prompt engineering and model inference skills, thereby increasing the efficiency of LLM applications for intricate tasks. Moreover, it enhances system clarity, robustness, and fault tolerance.
Key Concepts
1. Nodes
Nodes are the key components of a workflow; connecting nodes with different functionalities enables the execution of a series of operations.
2. Variables
Variables are used to link the input and output of nodes within a workflow, enabling complex processing logic throughout the process.
The output variables of nodes are system-fixed and cannot be edited.
Node names must be unique to avoid conflicts.
Only upstream node variables can be referenced in the process.
Nodes typically define input variables, e.g., sys.query for a question classifier.
Workflows need to specify execution start variables, like sys.query for a chatbot.
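The variable rules above can be sketched as a small resolver. This is a hypothetical representation, not the actual AI721 schema: each node stores its outputs under its unique name, and downstream nodes reference them as "node_name.variable".

```python
# Minimal sketch of workflow variable resolution (hypothetical structure,
# not the real AI721 schema). Outputs are keyed by unique node name.

def resolve(ref: str, outputs: dict) -> str:
    """Resolve a reference like 'start.sys.query' against executed nodes."""
    node_name, _, var = ref.partition(".")
    if node_name not in outputs:
        # Enforces the rule that only upstream (already executed)
        # node variables can be referenced.
        raise KeyError(f"node '{node_name}' has not run yet")
    return outputs[node_name][var]

# The start node has produced the system variable sys.query:
outputs = {"start": {"sys.query": "What is AI721?"}}
print(resolve("start.sys.query", outputs))  # -> What is AI721?
```

Because node names key the output map, the uniqueness rule above is what keeps references unambiguous.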
3. Edges
Edges must connect exactly two nodes within the workflow. Edges indicate the flow from the source node to the target node, establishing the order of operations.
Supported Workflow
As an open protocol, our goal is not to create a new workflow data structure. Instead, we aim to create a unified platform that ensures maximum compatibility with existing workflow structures available on the market.
We will support the following workflow formats (open-source only):
ComfyUI (comfy)
Dify.ai (dify)
LangGraph (langgraph)
Crewai.com (crewai)
AutoGen Studio (autogen)
Langflow.org (langflow)
Fastgpt.in (fastgpt)
You can also use open-source tools like LangChain to create custom workflow formats and integrate them into the AI721 protocol.
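A custom workflow format only needs to express the concepts above: uniquely named nodes, their variables, and edges. A minimal sketch of such a document, using illustrative field names rather than the actual AI721 schema:

```python
import json

# Hypothetical minimal custom workflow document. Field names are
# illustrative only; they are not the real AI721 protocol schema.
workflow = {
    "format": "custom",
    "nodes": [
        {"name": "start", "type": "input", "outputs": ["sys.query"]},
        {"name": "answer", "type": "llm", "inputs": ["start.sys.query"]},
    ],
    "edges": [{"source": "start", "target": "answer"}],
}

doc = json.dumps(workflow, indent=2)
print(doc)
```

A tool like LangChain would then sit behind the node types, while the document itself stays a plain, portable description of nodes and edges.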
How to Run Workflows
Neuro Node runners use the engine service to provide unified API gateways and execute workflows on demand. You can also run a local node service to test the process.
Neuro Nodes can flexibly choose to call LLMs through the llm field API, use specific LLM APIs defined in the workflow nodes, or opt for lower-cost custom local LLMs. Gonesis provides this flexibility to adapt to various scenarios.
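The three selection strategies above can be sketched as a simple precedence rule. The function and field names here are illustrative assumptions, not the real Gonesis API:

```python
# Sketch of the three LLM-selection strategies described above
# (names are hypothetical, not the actual Gonesis/AI721 interface).

def pick_llm_endpoint(node: dict, request_llm=None) -> str:
    # 1. An llm field supplied with the request takes precedence.
    if request_llm:
        return request_llm
    # 2. Otherwise, use an LLM API pinned in the workflow node itself.
    if "llm_api" in node:
        return node["llm_api"]
    # 3. Otherwise, fall back to a lower-cost local model.
    return "http://localhost:8000/v1"

node = {"name": "answer", "llm_api": "https://api.example.com/v1"}
print(pick_llm_endpoint(node))                            # node-defined API
print(pick_llm_endpoint(node, "https://llm.example/v1"))  # caller override
print(pick_llm_endpoint({"name": "bare"}))                # local fallback
```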