# Programs
A Program in Synalinks is the fundamental unit of deployment and training. Just as a function encapsulates logic in traditional programming, a Program encapsulates the entire computation graph of your Language Model application, from input to output, including all intermediate transformations.
## Why Programs Matter
In traditional LLM development, you write procedural code that calls APIs:
```mermaid
graph LR
    subgraph Traditional Approach
        A[Function] --> B[API Call 1]
        B --> C[Parse]
        C --> D[API Call 2]
        D --> E[Return]
    end
```
This approach has limitations: the resulting pipeline cannot be trained, serialized, or visualized as a whole.
Synalinks Programs provide a declarative computation graph:
```mermaid
graph LR
    subgraph Synalinks Program
        A[Input DataModel] --> B[Module 1]
        B --> C[Module 2]
        C --> D[Output DataModel]
    end
    E[Training] -.-> B
    E -.-> C
    F[Save/Load] -.-> B
    F -.-> C
```
Programs provide:
- Trainability: Optimize instructions and examples over time
- Serialization: Save and load trained state
- Visualization: Understand your computation graph
- Composability: Nest programs within programs
## The Four Program Creation Strategies
Synalinks offers four distinct strategies for creating programs, each suited to different use cases:
```mermaid
graph TD
    A[Program Creation] --> B[Functional API]
    A --> C[Subclassing API]
    A --> D[Sequential API]
    A --> E[Mixing Strategy]
    B --> F["Most Flexible<br>(Recommended)"]
    C --> G["Custom Logic<br>in call()"]
    D --> H["Simple Linear<br>Pipelines"]
    E --> I["Reusable<br>Components"]
```
## Strategy 1: The Functional API (Recommended)
The Functional API is the most powerful and flexible approach. You build a computation graph by chaining module calls, starting from an `Input` node:
```python
import asyncio
from dotenv import load_dotenv
import synalinks


class Query(synalinks.DataModel):
    """User question."""
    query: str = synalinks.Field(description="User question")


class Answer(synalinks.DataModel):
    """Answer with reasoning."""
    thinking: str = synalinks.Field(description="Step by step thinking")
    answer: str = synalinks.Field(description="The final answer")


async def main():
    load_dotenv()
    synalinks.clear_session()
    lm = synalinks.LanguageModel(model="openai/gpt-4.1-mini")

    # Step 1: Define the entry point
    inputs = synalinks.Input(data_model=Query)

    # Step 2: Chain module calls (this builds the graph)
    outputs = await synalinks.Generator(
        data_model=Answer,
        language_model=lm,
    )(inputs)

    # Step 3: Wrap in a Program
    program = synalinks.Program(
        inputs=inputs,
        outputs=outputs,
        name="qa_program",
    )

    # Step 4: Use the program
    result = await program(Query(query="What is 2+2?"))
    print(f"Answer: {result['answer']}")


if __name__ == "__main__":
    asyncio.run(main())
```
The Functional API excels at:
- Parallel branches: Multiple modules can process the same input
- Complex routing: Decisions and branches based on content
- Merging: Combining outputs from multiple paths
## Strategy 2: The Subclassing API
The Subclassing API gives you complete control over the execution logic. You inherit from `synalinks.Program` and override the `call()` method:
```python
import synalinks


class Query(synalinks.DataModel):
    query: str = synalinks.Field(description="User question")


class Answer(synalinks.DataModel):
    answer: str = synalinks.Field(description="The final answer")


class QAProgram(synalinks.Program):
    """A custom QA program using subclassing."""

    def __init__(self, language_model, **kwargs):
        super().__init__(**kwargs)
        self.language_model = language_model
        # Create modules in __init__
        self.generator = synalinks.Generator(
            data_model=Answer,
            language_model=language_model,
        )

    async def call(
        self,
        inputs: synalinks.JsonDataModel,
        training: bool = False,
    ) -> synalinks.JsonDataModel:
        # Custom logic here
        return await self.generator(inputs, training=training)
```
Use the Subclassing API when you need:
- Custom logic that doesn't fit the functional paradigm
- State management beyond trainable variables
- Integration with external systems during execution
## Strategy 3: The Sequential API
The Sequential API is the simplest approach for linear pipelines where each module feeds directly into the next:
```mermaid
graph LR
    A[Input] --> B[Module 1]
    B --> C[Module 2]
    C --> D[Module 3]
    D --> E[Output]
```
```python
import synalinks


class Query(synalinks.DataModel):
    query: str = synalinks.Field(description="User question")


class Thinking(synalinks.DataModel):
    thinking: str = synalinks.Field(description="Step by step thinking")


class Answer(synalinks.DataModel):
    answer: str = synalinks.Field(description="The final answer")


lm = synalinks.LanguageModel(model="openai/gpt-4.1-mini")

# Simple linear pipeline using the .add() method
program = synalinks.Sequential(
    name="sequential_qa",
    description="A sequential question-answering pipeline",
)
program.add(synalinks.Input(data_model=Query))
program.add(synalinks.Generator(data_model=Thinking, language_model=lm))
program.add(synalinks.Generator(data_model=Answer, language_model=lm))
```
The Sequential API is ideal for:
- Simple, linear processing pipelines
- Quick prototyping
- When each step naturally flows to the next
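Like any other program, the sequential pipeline is awaited with an input instance; a sketch, assuming the `program` and `Query` defined in the example above:

```python
import asyncio

# Assumes `program` and `Query` from the Sequential example above.


async def main():
    result = await program(Query(query="What is 2+2?"))
    # The output carries the fields of the final module's data model
    print(result["answer"])

# asyncio.run(main())  # requires a configured API key
```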
## Strategy 4: The Mixing Strategy
The Mixing Strategy combines subclassing with the Functional API to create reusable components that can be used inside other programs:
```mermaid
graph TD
    subgraph Reusable Component
        A[build] --> B[Create Functional Graph]
        B --> C[Reinitialize as Program]
    end
    subgraph Main Program
        D[Input] --> E[Component]
        E --> F[More Processing]
        F --> G[Output]
    end
```
```python
import synalinks


class AnswerWithThinking(synalinks.DataModel):
    """Answer with reasoning."""
    thinking: str = synalinks.Field(description="Step by step thinking")
    answer: str = synalinks.Field(description="The final answer")


class ChainOfThought(synalinks.Program):
    """Reusable chain-of-thought component."""

    def __init__(self, language_model, **kwargs):
        super().__init__(**kwargs)
        self.language_model = language_model

    async def build(self, inputs: synalinks.SymbolicDataModel) -> None:
        """Build the computation graph when first called."""
        outputs = await synalinks.Generator(
            data_model=AnswerWithThinking,
            language_model=self.language_model,
        )(inputs)
        # Reinitialize with the built graph
        super().__init__(
            inputs=inputs,
            outputs=outputs,
            name=self.name,
        )
```
The Mixing Strategy is powerful for:
- Creating library components
- Encapsulating complex sub-graphs
- Building a toolkit of reusable patterns
## Program Features
### Saving and Loading
Programs serialize their entire state to JSON:
```python
# Save a program
program.save("my_program.json")

# Load a program
loaded = synalinks.Program.load("my_program.json")
```
This includes all trainable variables (optimized instructions and examples).
### Program Summary
Inspect your program's structure:
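The table below is produced by calling `summary()` on the program (also noted in the key takeaways); assuming the `program` built in the Functional API example, the call is simply:

```python
# Assumes `program` from the Functional API example above.
program.summary()
```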
Output:

```
Program: qa_program
===============================
| Module          | Trainable |
|-----------------|-----------|
| Input           | No        |
| Generator       | Yes       |
===============================
Total parameters: 2
Trainable parameters: 2
```
### Batch Inference
Process multiple inputs efficiently:
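Synalinks mirrors the Keras training and inference API, so batch inference presumably goes through a Keras-style `predict()` over an object array of inputs. A sketch only; the exact `predict()` signature and the NumPy object-array convention are assumptions:

```python
import numpy as np

# Assumes `program` and `Query` from the Functional API example above.
queries = np.array(
    [
        Query(query="What is 2+2?"),
        Query(query="What is 3+3?"),
    ],
    dtype="object",
)

# One call processes the whole batch
results = program.predict(queries)
for result in results:
    print(result["answer"])
```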
## Key Takeaways

- Functional API: The recommended approach for most use cases. Build computation graphs by chaining module calls from `Input` to outputs. Supports parallel branches, decisions, and complex routing.
- Subclassing API: Use when you need custom logic in the `call()` method. Gives you complete control but loses some declarative benefits.
- Sequential API: Perfect for simple linear pipelines where modules feed directly into each other. Minimal boilerplate.
- Mixing Strategy: Create reusable components that can be embedded in other programs. Best for building a library of patterns.
- Serialization: All programs can be saved to JSON and loaded back, preserving trained state and configuration.
- `Program.summary()`: Use this to inspect your program's structure and identify trainable modules.
## API References

- `Answer`
- `AnswerWithThinking` (bases: `DataModel`): Answer with reasoning.
- `ChainOfThought` (bases: `Program`): Reusable chain-of-thought component.
- `QAProgram` (bases: `Program`): A QA program using subclassing.