First Steps

Welcome to Synalinks! This lesson covers the essential concepts you need to understand before building AI applications.

Installation

# Using pip
pip install synalinks

# Or using uv (recommended)
uv pip install synalinks
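
You can verify the installation by printing the package version; the same check appears in the setup() helper documented at the end of this page:

import synalinks

# Confirm that Synalinks is importable and print its version
print(f"Synalinks version: {synalinks.__version__}")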

Key Concepts

1. No Traditional Prompting

In Synalinks, you don't write prompts manually. Instead, you define:

  • Input Data Models: What data goes into your program
  • Output Data Models: What data comes out

The flow looks like this:

graph LR
    A[Input DataModel] --> B[Synalinks]
    B --> C[Auto-Generated Prompt]
    C --> D[LLM]
    D --> E[Output DataModel]

The framework automatically constructs prompts from your data model definitions.
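
To see exactly what is sent to the LLM, you can print the default prompt template; this is the same call used by the show_prompt_template() helper documented below:

import synalinks

# The template uses Markdown headers, and your data model schemas
# are inserted into it automatically.
print(synalinks.default_prompt_template())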

2. Data Models and Fields

Data models define the structure of your inputs and outputs. Use Field to add descriptions that help the LLM understand what each field should contain:

class Answer(synalinks.DataModel):
    thinking: str = synalinks.Field(
        description="Your step by step reasoning"
    )
    answer: str = synalinks.Field(
        description="The final answer"
    )

3. Constrained Structured Output

Synalinks uses constrained structured output to ensure LLM responses always match your data model specification. No parsing errors!
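
Because the response is validated against your data model before you see it, you can access fields with plain dictionary indexing. A minimal sketch, reusing the program and Query model built later in this lesson:

async def ask(program, question: str) -> str:
    # result always contains the AnswerWithThinking fields,
    # so these lookups cannot fail on malformed LLM output.
    result = await program(Query(query=question))
    print(result["thinking"])
    return result["answer"]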

4. Session Management

Always clear the session at the start of scripts to ensure reproducible module naming:

synalinks.clear_session()
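
Putting the pieces together, a typical script preamble looks like this (mirroring the setup() helper documented below):

from dotenv import load_dotenv
import synalinks

load_dotenv()              # load API keys from your .env file
synalinks.clear_session()  # reset the global context for reproducible module naming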

Building a Simple Program

Here's a complete example that creates a question-answering program:

import asyncio
from dotenv import load_dotenv
import synalinks

# Define input data model
class Query(synalinks.DataModel):
    query: str = synalinks.Field(description="The user query to answer")

# Define output data model with chain-of-thought
class AnswerWithThinking(synalinks.DataModel):
    thinking: str = synalinks.Field(description="Your step by step thinking process")
    answer: str = synalinks.Field(description="The correct answer based on your thinking")

async def main():
    load_dotenv()
    synalinks.clear_session()

    # Initialize a language model
    language_model = synalinks.LanguageModel(model="openai/gpt-4.1-mini")

    # Build the program using the Functional API
    inputs = synalinks.Input(data_model=Query)
    outputs = await synalinks.Generator(
        data_model=AnswerWithThinking,
        language_model=language_model,
    )(inputs)

    program = synalinks.Program(
        inputs=inputs,
        outputs=outputs,
        name="chain_of_thought_qa",
    )

    # Run the program
    result = await program(Query(query="What is 2 + 2?"))
    print(f"Thinking: {result['thinking']}")
    print(f"Answer: {result['answer']}")

asyncio.run(main())

By adding a thinking field to our output model, we instruct the LLM to show its reasoning - this is called "Chain of Thought" prompting, achieved simply by defining the output structure!
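
For comparison, generating with the plain Answer model from the API reference below produces only the final answer; inside main(), swapping the data model is the only change needed:

# Same pipeline, but constrained to the simpler Answer model:
# the LLM returns a final answer with no "thinking" field.
outputs = await synalinks.Generator(
    data_model=Answer,
    language_model=language_model,
)(inputs)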

Key Takeaways

  • No Prompt Engineering: Define data models instead of writing prompts - the framework generates prompts automatically from your schemas.
  • Structured Output: All LLM responses are guaranteed to match your data model specification through constrained generation.
  • Field Descriptions: Use descriptive Field annotations to guide the LLM on what each field should contain.
  • Chain of Thought: Add a "thinking" field to your output model to get step-by-step reasoning from the LLM.

API References

Answer

Bases: DataModel

A simple answer from the LLM.

Source code in examples/0_first_steps.py
class Answer(synalinks.DataModel):
    """A simple answer from the LLM."""

    answer: str = synalinks.Field(
        description="The correct answer to the query",
    )

AnswerWithThinking

Bases: DataModel

An answer with step-by-step reasoning.

By adding a 'thinking' field, we instruct the LLM to show its work. This is called "Chain of Thought" prompting - but we achieve it simply by defining the output structure!

Source code in examples/0_first_steps.py
class AnswerWithThinking(synalinks.DataModel):
    """An answer with step-by-step reasoning.

    By adding a 'thinking' field, we instruct the LLM to show its work.
    This is called "Chain of Thought" prompting - but we achieve it
    simply by defining the output structure!
    """

    thinking: str = synalinks.Field(
        description="Your step by step thinking process",
    )
    answer: str = synalinks.Field(
        description="The correct answer based on your thinking",
    )

Query

Bases: DataModel

The input to our program - a user's question.

The docstring becomes part of the schema description.

Source code in examples/0_first_steps.py
class Query(synalinks.DataModel):
    """The input to our program - a user's question.

    The docstring becomes part of the schema description.
    """

    query: str = synalinks.Field(
        description="The user query to answer",
    )

setup()

Setup Synalinks for use.

Source code in examples/0_first_steps.py
def setup():
    """Setup Synalinks for use."""
    # Check version
    print(f"Synalinks version: {synalinks.__version__}")

    # Clear the global context for reproducible naming
    # This ensures modules get consistent names across runs
    synalinks.clear_session()

show_prompt_template()

Display the default prompt template.

Source code in examples/0_first_steps.py
def show_prompt_template():
    """Display the default prompt template."""
    print("=" * 60)
    print("Default Prompt Template")
    print("=" * 60)
    print()
    print("Synalinks automatically constructs prompts using this template:")
    print()
    print(synalinks.default_prompt_template())
    print()
    print("-" * 60)
    print("The template uses Markdown headers for structure.")
    print("Your data model schemas are automatically inserted!")
    print()