The Mixing Strategy (Recommended)
You've learned three ways to build programs:
- Functional API (1a): Flexible graph building
- Subclassing (1b): Full control but more boilerplate
- Sequential (1c): Simplest for linear pipelines
Now, let's explore the recommended approach: mixing subclassing with the Functional API. This gives you the best of both worlds!
Why Use the Mixing Strategy?
| Approach | Encapsulation | Boilerplate | Flexibility |
|---|---|---|---|
| Functional API | Low | Low | High |
| Subclassing | High | High | High |
| Sequential | Medium | Very Low | Low |
| Mixing | High | Low | High |
The mixing strategy provides:
- Encapsulation: Your program is a reusable class
- No boilerplate: No need for call(), get_config(), or from_config()
- Flexibility: Full power of the Functional API inside
The Pattern
```python
# Step 1: Define a reusable module using the mixing strategy
class MyModule(synalinks.Program):
    def __init__(self, language_model=None, ...):
        super().__init__()  # Initialize without inputs/outputs
        self.language_model = language_model  # Store config

    async def build(self, inputs):
        # Use Functional API to create the graph
        # `inputs` is a SymbolicDataModel (from the outer program)
        outputs = await synalinks.Generator(...)(inputs)
        # Re-initialize with inputs and outputs
        super().__init__(
            inputs=inputs,
            outputs=outputs,
            name=self.name,
        )

# Step 2: Use the module inside a functional program
my_module = MyModule(language_model=lm)
inputs = synalinks.Input(data_model=Query)  # Creates symbolic input
outputs = await my_module(inputs)  # Triggers build() with symbolic input
program = synalinks.Program(inputs=inputs, outputs=outputs)
```
How It Works
```mermaid
sequenceDiagram
    participant User
    participant MyModule
    participant FunctionalAPI
    participant Graph
    User->>MyModule: Create instance with config
    User->>FunctionalAPI: inputs = Input(data_model)
    User->>MyModule: await my_module(inputs)
    MyModule->>MyModule: build(inputs) triggered
    MyModule->>FunctionalAPI: Create internal graph
    MyModule->>Graph: Re-initialize as Program
    Graph-->>User: outputs (SymbolicDataModel)
```
- Define your class: Implement __init__() and build() only
- Create an instance: Store configuration (language models, settings)
- Use in Functional API: Call the module with a symbolic Input
- build() is triggered: Receives symbolic data, creates the graph
The key insight: the mixing strategy creates reusable modules that you
compose using the Functional API. The build() method receives symbolic
inputs when called during graph construction.
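The deferred-build mechanics can be illustrated without synalinks at all. The sketch below is a plain-Python analogy (ProgramSketch and Doubler are hypothetical stand-ins, not synalinks classes): the base class defers build() until the first call, and the subclass re-initializes itself via super().__init__() once it knows its inputs, exactly as the pattern above does.

```python
class ProgramSketch:
    """Hypothetical stand-in for synalinks.Program (not the real base class)."""

    def __init__(self, inputs=None, outputs=None, name=None):
        self.inputs = inputs
        self.outputs = outputs
        self.name = name
        # "Built" once both ends of the graph are known
        self.built = inputs is not None and outputs is not None

    def __call__(self, inputs):
        if not self.built:
            self.build(inputs)  # deferred build on first use
        return self.outputs

class Doubler(ProgramSketch):
    def __init__(self, factor=2, name=None):
        super().__init__(name=name)  # initialize without inputs/outputs
        self.factor = factor         # store config only

    def build(self, inputs):
        outputs = [x * self.factor for x in inputs]  # "graph" construction
        # Re-initialize with inputs and outputs, as in the mixing strategy
        super().__init__(inputs=inputs, outputs=outputs, name=self.name)

module = Doubler(factor=3)
print(module.built)    # False: nothing built yet
print(module([1, 2]))  # first call triggers build() -> [3, 6]
print(module.built)    # True
```

The analogy is deliberately synchronous; the real build() is async and receives a SymbolicDataModel rather than a list.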
Complete Example
```python
import asyncio

from dotenv import load_dotenv

import synalinks


class Query(synalinks.DataModel):
    query: str = synalinks.Field(description="The user query")


class AnswerWithThinking(synalinks.DataModel):
    thinking: str = synalinks.Field(description="Your step by step thinking")
    answer: str = synalinks.Field(description="The correct answer")


class ChainOfThought(synalinks.Program):
    """Reusable module using the mixing strategy."""

    def __init__(self, language_model=None, name=None):
        super().__init__(name=name)
        self.language_model = language_model

    async def build(self, inputs):
        # Use Functional API inside build()
        outputs = await synalinks.Generator(
            data_model=AnswerWithThinking,
            language_model=self.language_model,
        )(inputs)
        # Re-initialize as a Functional program
        super().__init__(inputs=inputs, outputs=outputs, name=self.name)


async def main():
    load_dotenv()
    language_model = synalinks.LanguageModel(model="openai/gpt-4.1")

    # Use the mixed module in a functional program
    chain_of_thought = ChainOfThought(language_model=language_model)
    inputs = synalinks.Input(data_model=Query)
    outputs = await chain_of_thought(inputs)  # Triggers build()
    program = synalinks.Program(inputs=inputs, outputs=outputs)

    result = await program(Query(query="What is 15% of 80?"))
    print(f"Answer: {result['answer']}")


asyncio.run(main())
```
Key Takeaways
- Mixing Strategy: Combine subclassing with the Functional API for the best of both worlds - encapsulation without boilerplate.
- build() Method: Override build() to use the Functional API inside your class, receiving symbolic inputs during graph construction.
- Automatic Serialization: No need for get_config() or from_config() when using the mixing strategy.
- Reusable Components: Create library-quality modules that can be composed into larger programs.
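The "reusable components" point is easiest to see with two instances of one module class, each holding its own configuration and each building its own graph, composed by feeding one's output into the other. This is a plain-Python sketch of that idea (Scale is a hypothetical stand-in, not a synalinks class):

```python
class Scale:
    """Hypothetical stand-in module: config in __init__, graph built on first use."""

    def __init__(self, factor):
        self.factor = factor  # per-instance configuration
        self.outputs = None   # no graph yet

    def build(self, inputs):
        self.outputs = [x * self.factor for x in inputs]

    def __call__(self, inputs):
        if self.outputs is None:  # build triggered on first call
            self.build(inputs)
        return self.outputs

# Two instances of the same class with different configs...
double = Scale(2)
triple = Scale(3)

# ...composed into a larger pipeline: triple consumes double's output
stage1 = double([1, 2])
print(stage1)          # [2, 4]
print(triple(stage1))  # [6, 12]
```

In synalinks the composition happens symbolically (each call wires the module into the outer graph), but the division of labor is the same: configuration at construction time, wiring at build time.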
Program Visualization
API References
AnswerWithThinking
Bases: DataModel
The output from our program - reasoning + final answer.
Source code in examples/1d_mixing_strategy.py
ChainOfThought
Bases: Program
A program that answers questions with step-by-step reasoning.
This uses the MIXING STRATEGY: subclassing + Functional API. Notice how we DON'T implement call(), get_config(), or from_config()!
async build(inputs)
Build the program graph using the Functional API.
This method is called AUTOMATICALLY when the program is first used. You don't need to call it yourself!
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| inputs | SymbolicDataModel | A SymbolicDataModel representing the input data model. | required |
Critique
Bases: DataModel
A critique of an answer.
Query
RefinedAnswer
Bases: DataModel
A refined answer after self-critique.
SelfCritiquingReasoner
Bases: Program
A more complex program: reason, critique, then refine.
This demonstrates building a multi-step pipeline with the mixing strategy.
Flow: Query -> Think+Answer -> Critique -> Refine
async build(inputs)
Build a multi-step reasoning pipeline.
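The Query -> Think+Answer -> Critique -> Refine wiring comes down to chaining awaited calls inside build(). In the sketch below, StubGenerator is a local stand-in for synalinks.Generator so the flow is runnable on its own; a real build() would await actual Generator instances and then re-initialize with super().__init__(inputs=inputs, outputs=outputs).

```python
import asyncio

class StubGenerator:
    """Local stand-in for synalinks.Generator: tags the data with its step name."""

    def __init__(self, step):
        self.step = step

    async def __call__(self, inputs):
        return inputs + [self.step]  # record which stage processed the data

async def build(inputs):
    # Chain the three stages, mirroring the pipeline flow
    x = await StubGenerator("think+answer")(inputs)
    x = await StubGenerator("critique")(x)
    return await StubGenerator("refine")(x)

print(asyncio.run(build(["query"])))
# ['query', 'think+answer', 'critique', 'refine']
```

The stand-in passes plain lists; in the real module each stage consumes and produces DataModel instances (AnswerWithThinking, Critique, RefinedAnswer).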
