
Mixing Strategy

You've learned three ways to build programs:

  • Functional API (1a): Flexible graph building
  • Subclassing (1b): Full control but more boilerplate
  • Sequential (1c): Simplest for linear pipelines

Now, let's explore the recommended approach: mixing subclassing with the Functional API. This gives you the best of both worlds!

Why Use the Mixing Strategy?

Approach         Encapsulation   Boilerplate   Flexibility
Functional API   Low             Low           High
Subclassing      High            High          High
Sequential       Medium          Very Low      Low
Mixing           High            Low           High

The mixing strategy provides:

  1. Encapsulation: Your program is a reusable class
  2. No boilerplate: No need for call(), get_config(), or from_config() (see the sketch after this list)
  3. Flexibility: Full power of the Functional API inside
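
To see where the boilerplate savings come from, here is a rough sketch of what a pure-subclassing version (lesson 1b) has to spell out by hand. The class name is a placeholder, and the method signatures are illustrative assumptions based on the Keras-style contract rather than verbatim synalinks API; AnswerWithThinking is the data model defined further down this page.

# Pure subclassing: forward pass and serialization hooks are written by hand.
# (Illustrative sketch only; signatures are assumptions, not verbatim API.)
class SubclassedChainOfThought(synalinks.Program):  # hypothetical class name
    def __init__(self, language_model=None, **kwargs):
        super().__init__(**kwargs)
        self.generator = synalinks.Generator(
            data_model=AnswerWithThinking,
            language_model=language_model,
        )

    async def call(self, inputs):     # the forward pass you must write yourself
        return await self.generator(inputs)

    def get_config(self):             # serialization hook you must write yourself
        ...

    @classmethod
    def from_config(cls, config):     # deserialization hook you must write yourself
        ...

# With the mixing strategy (next section), only __init__() and build() are needed.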

The Pattern

# Step 1: Define a reusable module using the mixing strategy
class MyModule(synalinks.Program):

    def __init__(self, language_model=None, ...):
        super().__init__()  # Initialize without inputs/outputs
        self.language_model = language_model  # Store config

    async def build(self, inputs):
        # Use Functional API to create the graph
        # `inputs` is a SymbolicDataModel (from the outer program)
        outputs = await synalinks.Generator(...)(inputs)

        # Re-initialize with inputs and outputs
        super().__init__(
            inputs=inputs,
            outputs=outputs,
            name=self.name,
        )

# Step 2: Use the module inside a functional program
my_module = MyModule(language_model=lm)

inputs = synalinks.Input(data_model=Query)  # Creates symbolic input
outputs = await my_module(inputs)           # Triggers build() with symbolic input

program = synalinks.Program(inputs=inputs, outputs=outputs)

How It Works

sequenceDiagram
    participant User
    participant MyModule
    participant FunctionalAPI
    participant Graph

    User->>MyModule: Create instance with config
    User->>FunctionalAPI: inputs = Input(data_model)
    User->>MyModule: await my_module(inputs)
    MyModule->>MyModule: build(inputs) triggered
    MyModule->>FunctionalAPI: Create internal graph
    MyModule->>Graph: Re-initialize as Program
    Graph-->>User: outputs (SymbolicDataModel)

  1. Define your class: Implement __init__() and build() only
  2. Create an instance: Store configuration (language models, settings)
  3. Use in Functional API: Call the module with a symbolic Input
  4. build() is triggered: Receives symbolic data, creates the graph

The key insight: the mixing strategy creates reusable modules that you compose using the Functional API. The build() method receives symbolic inputs when called during graph construction.
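
To make the composition concrete, here is a minimal sketch (assumed to run inside an async context) that feeds a module's symbolic output into a further Functional API step. It reuses the ChainOfThought, Query, and RefinedAnswer definitions that appear later on this page, and is illustrative rather than a complete script.

lm = synalinks.LanguageModel(model="openai/gpt-4.1")
reasoner = ChainOfThought(language_model=lm)    # reusable module from the example below

inputs = synalinks.Input(data_model=Query)      # symbolic input
thinking = await reasoner(inputs)               # triggers ChainOfThought.build(inputs)

# The module's symbolic output can feed any other Functional API step,
# here a refinement Generator that sees both the query and the first answer.
outputs = await synalinks.Generator(
    data_model=RefinedAnswer,
    language_model=lm,
)(inputs + thinking)                            # `+` combines data models, as in SelfCritiquingReasoner below

program = synalinks.Program(inputs=inputs, outputs=outputs)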

Complete Example

import asyncio
from dotenv import load_dotenv
import synalinks

class Query(synalinks.DataModel):
    query: str = synalinks.Field(description="The user query")

class AnswerWithThinking(synalinks.DataModel):
    thinking: str = synalinks.Field(description="Your step by step thinking")
    answer: str = synalinks.Field(description="The correct answer")

class ChainOfThought(synalinks.Program):
    """Reusable module using the mixing strategy."""

    def __init__(self, language_model=None, name=None):
        super().__init__(name=name)
        self.language_model = language_model

    async def build(self, inputs):
        # Use Functional API inside build()
        outputs = await synalinks.Generator(
            data_model=AnswerWithThinking,
            language_model=self.language_model,
        )(inputs)

        # Re-initialize as a Functional program
        super().__init__(inputs=inputs, outputs=outputs, name=self.name)

async def main():
    load_dotenv()
    language_model = synalinks.LanguageModel(model="openai/gpt-4.1")

    # Use the mixed module in a functional program
    chain_of_thought = ChainOfThought(language_model=language_model)

    inputs = synalinks.Input(data_model=Query)
    outputs = await chain_of_thought(inputs)  # Triggers build()

    program = synalinks.Program(inputs=inputs, outputs=outputs)

    result = await program(Query(query="What is 15% of 80?"))
    print(f"Answer: {result['answer']}")

asyncio.run(main())
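
Once built, the program can be reused for any number of queries. Dict-style access is assumed to work for the thinking field exactly as it does for answer in the print statement above.

# Inside main(), after program has been constructed:
for question in ["What is 15% of 80?", "What is 7 squared?"]:
    result = await program(Query(query=question))
    print(result["thinking"])   # step-by-step reasoning
    print(result["answer"])     # final answer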

Key Takeaways

  • Mixing Strategy: Combine subclassing with the Functional API to get encapsulation without boilerplate.
  • build() Method: Override build() to use the Functional API inside your class, receiving symbolic inputs during graph construction.
  • Automatic Serialization: No need for get_config() or from_config() when using the mixing strategy.
  • Reusable Components: Create library-quality modules that can be composed into larger programs.

Program Visualization

(Program graph visualization of the chain_of_thought program from the example above.)

API References

AnswerWithThinking

Bases: DataModel

The output from our program - reasoning + final answer.

Source code in examples/1d_mixing_strategy.py
class AnswerWithThinking(synalinks.DataModel):
    """The output from our program - reasoning + final answer."""

    thinking: str = synalinks.Field(
        description="Your step by step thinking",
    )
    answer: str = synalinks.Field(
        description="The correct answer",
    )

ChainOfThought

Bases: Program

A program that answers questions with step-by-step reasoning.

This uses the MIXING STRATEGY: subclassing + Functional API. Notice how we DON'T implement call(), get_config(), or from_config()!

Source code in examples/1d_mixing_strategy.py
class ChainOfThought(synalinks.Program):
    """A program that answers questions with step-by-step reasoning.

    This uses the MIXING STRATEGY: subclassing + Functional API.
    Notice how we DON'T implement call(), get_config(), or from_config()!
    """

    def __init__(
        self,
        language_model=None,
        name=None,
        description=None,
        trainable=True,
    ):
        # Step 1: Initialize the base Program (without inputs/outputs yet)
        super().__init__(
            name=name,
            description=description,
            trainable=trainable,
        )

        # Step 2: Store configuration for later use in build()
        # These are NOT modules yet - just configuration!
        self.language_model = language_model

    async def build(self, inputs: synalinks.SymbolicDataModel) -> None:
        """Build the program graph using the Functional API.

        This method is called AUTOMATICALLY when the program is first used.
        You don't need to call it yourself!

        Args:
            inputs (SymbolicDataModel): A SymbolicDataModel representing
                    the input data model.
        """
        # Step 3: Use Functional API to create the computation graph
        # This is exactly like the Functional API (Lesson 1a)!
        outputs = await synalinks.Generator(
            data_model=AnswerWithThinking,
            language_model=self.language_model,
        )(inputs)

        # Step 4: Re-initialize as a Functional program
        # This tells Synalinks the complete graph structure
        super().__init__(
            inputs=inputs,
            outputs=outputs,
            name=self.name,
            description=self.description,
            trainable=self.trainable,
        )

build(inputs) async

Build the program graph using the Functional API.

This method is called AUTOMATICALLY when the program is first used. You don't need to call it yourself!

Parameters:

Name     Type                Description                                               Default
inputs   SymbolicDataModel   A SymbolicDataModel representing the input data model.    required
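
For reference, the symbolic input that build() receives is simply what synalinks.Input returns in the outer program; the first call with it triggers the build, as in the complete example above.

inputs = synalinks.Input(data_model=Query)    # a SymbolicDataModel: schema only, no data
outputs = await chain_of_thought(inputs)      # first call triggers build(inputs)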

Critique

Bases: DataModel

A critique of an answer.

Source code in examples/1d_mixing_strategy.py
class Critique(synalinks.DataModel):
    """A critique of an answer."""

    issues: str = synalinks.Field(
        description="Any issues or problems with the answer",
    )
    is_correct: bool = synalinks.Field(
        description="Whether the answer appears correct",
    )

Query

Bases: DataModel

The input to our program - a user's question.

Source code in examples/1d_mixing_strategy.py
class Query(synalinks.DataModel):
    """The input to our program - a user's question."""

    query: str = synalinks.Field(
        description="The user query",
    )

RefinedAnswer

Bases: DataModel

A refined answer after self-critique.

Source code in examples/1d_mixing_strategy.py
class RefinedAnswer(synalinks.DataModel):
    """A refined answer after self-critique."""

    original_answer: str = synalinks.Field(
        description="The original answer",
    )
    refinement: str = synalinks.Field(
        description="Any refinements or corrections",
    )
    final_answer: str = synalinks.Field(
        description="The final, refined answer",
    )

SelfCritiquingReasoner

Bases: Program

A more complex program: reason, critique, then refine.

This demonstrates building a multi-step pipeline with the mixing strategy.

Flow: Query -> Think+Answer -> Critique -> Refine

Source code in examples/1d_mixing_strategy.py
class SelfCritiquingReasoner(synalinks.Program):
    """A more complex program: reason, critique, then refine.

    This demonstrates building a multi-step pipeline with the mixing strategy.

    Flow: Query -> Think+Answer -> Critique -> Refine
    """

    def __init__(
        self,
        language_model=None,
        name=None,
        description=None,
        trainable=True,
    ):
        super().__init__(
            name=name,
            description=description,
            trainable=trainable,
        )
        self.language_model = language_model

    async def build(self, inputs: synalinks.SymbolicDataModel) -> None:
        """Build a multi-step reasoning pipeline."""

        # Step 1: Generate initial answer with thinking
        initial_answer = await synalinks.Generator(
            data_model=AnswerWithThinking,
            language_model=self.language_model,
            name="initial_reasoner",
        )(inputs)

        # Step 2: Critique the answer
        # Combine query + answer for context
        critique_input = inputs + initial_answer
        critique = await synalinks.Generator(
            data_model=Critique,
            language_model=self.language_model,
            name="self_critic",
        )(critique_input)

        # Step 3: Refine based on critique
        refine_input = critique_input + critique
        refined = await synalinks.Generator(
            data_model=RefinedAnswer,
            language_model=self.language_model,
            name="refiner",
        )(refine_input)

        # Initialize as Functional program
        super().__init__(
            inputs=inputs,
            outputs=refined,
            name=self.name,
            description=self.description,
            trainable=self.trainable,
        )

build(inputs) async

Build a multi-step reasoning pipeline.

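
SelfCritiquingReasoner is wired up exactly like ChainOfThought in the complete example above. A minimal usage sketch (not part of the example file; assumes a language_model constructed as in main() above and an async context):

reasoner = SelfCritiquingReasoner(language_model=language_model)

inputs = synalinks.Input(data_model=Query)
outputs = await reasoner(inputs)   # triggers the three-step build() shown above

program = synalinks.Program(inputs=inputs, outputs=outputs)

result = await program(Query(query="What is 15% of 80?"))
print(result["final_answer"])      # field of RefinedAnswer, the program's output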