First Steps with Synalinks
Welcome to Synalinks! This lesson covers the essential concepts you need to understand before building AI applications.
Installation
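Synalinks is distributed on PyPI; assuming a standard Python environment (the package name is taken from the project name), it can be installed with pip:

```shell
pip install synalinks
```

You will also want a `.env` file with your LLM provider's API key, since the examples below load credentials with `python-dotenv`.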
Key Concepts
1. No Traditional Prompting
In Synalinks, you don't write prompts manually. Instead, you define:
- Input Data Models: What data goes into your program
- Output Data Models: What data comes out
```mermaid
graph LR
    A[Input DataModel] --> B[Synalinks]
    B --> C[Auto-Generated Prompt]
    C --> D[LLM]
    D --> E[Output DataModel]
```
The framework automatically constructs prompts from your data model definitions.
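To build intuition for this pipeline, here is a hypothetical sketch of prompt construction from field names and descriptions. This is illustrative only, not Synalinks' actual internals; the real template can be inspected with `show_prompt_template()` (see the API references below):

```python
# Hypothetical: turn input/output field descriptions into an LLM prompt.
input_fields = {"query": "The user query to answer"}
output_fields = {"answer": "The final answer"}

def build_prompt(inputs: dict, outputs: dict, payload: dict) -> str:
    lines = ["Given the following inputs:"]
    for name, desc in inputs.items():
        lines.append(f"- {name}: {desc} = {payload[name]!r}")
    lines.append("Respond with JSON containing:")
    for name, desc in outputs.items():
        lines.append(f"- {name}: {desc}")
    return "\n".join(lines)

prompt = build_prompt(input_fields, output_fields, {"query": "What is 2 + 2?"})
print(prompt)
```

The key idea is that the prompt is derived entirely from the data model definitions, so improving a field description improves the prompt without any prompt-engineering step.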
2. Data Models and Fields
Data models define the structure of your inputs and outputs. Use Field to
add descriptions that help the LLM understand what each field should contain:
```python
class Answer(synalinks.DataModel):
    thinking: str = synalinks.Field(
        description="Your step by step reasoning"
    )
    answer: str = synalinks.Field(
        description="The final answer"
    )
```
3. Constrained Structured Output
Synalinks uses constrained structured output to ensure LLM responses always match your data model specification. No parsing errors!
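To see what "constrained" means in practice, here is an illustrative sketch (not Synalinks' API) of the kind of JSON Schema a constrained-decoding backend enforces for the `Answer` model above — field names and descriptions become schema properties, and the decoder is only allowed to emit tokens that keep the output valid against it:

```python
import json

# The schema a backend would derive from the Answer data model (illustrative).
answer_schema = {
    "type": "object",
    "properties": {
        "thinking": {"type": "string", "description": "Your step by step reasoning"},
        "answer": {"type": "string", "description": "The final answer"},
    },
    "required": ["thinking", "answer"],
}

# Because generation is constrained to the schema, the raw LLM output is
# always valid JSON with exactly the expected fields.
raw = '{"thinking": "2 + 2 equals 4.", "answer": "4"}'
parsed = json.loads(raw)
print(sorted(parsed.keys()))
```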
4. Session Management
Always call synalinks.clear_session() at the start of your scripts to ensure reproducible module naming.
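To see why this matters, here is a hypothetical sketch of counter-based module naming (not Synalinks' actual internals): auto-generated names typically come from a global counter, so resetting it gives the same names on every run:

```python
import itertools

# Hypothetical: modules get names like "generator_0", "generator_1", ...
_counter = itertools.count()

def make_module_name(prefix: str) -> str:
    return f"{prefix}_{next(_counter)}"

def clear_session() -> None:
    """Reset the global counter, as synalinks.clear_session() conceptually does."""
    global _counter
    _counter = itertools.count()

make_module_name("generator")          # consumes "generator_0"
clear_session()                        # reset global state
name = make_module_name("generator")   # back to "generator_0"
print(name)
```

Without the reset, re-running a script in the same process would yield `generator_1`, `generator_2`, and so on, breaking reproducibility of saved or logged module names.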
Building a Simple Program
Here's a complete example that creates a question-answering program:
```python
import asyncio

from dotenv import load_dotenv

import synalinks


# Define the input data model
class Query(synalinks.DataModel):
    query: str = synalinks.Field(description="The user query to answer")


# Define the output data model with chain-of-thought
class AnswerWithThinking(synalinks.DataModel):
    thinking: str = synalinks.Field(description="Your step by step thinking process")
    answer: str = synalinks.Field(description="The correct answer based on your thinking")


async def main():
    load_dotenv()
    synalinks.clear_session()

    # Initialize a language model
    language_model = synalinks.LanguageModel(model="openai/gpt-4.1-mini")

    # Build the program using the Functional API
    inputs = synalinks.Input(data_model=Query)
    outputs = await synalinks.Generator(
        data_model=AnswerWithThinking,
        language_model=language_model,
    )(inputs)
    program = synalinks.Program(
        inputs=inputs,
        outputs=outputs,
        name="chain_of_thought_qa",
    )

    # Run the program
    result = await program(Query(query="What is 2 + 2?"))
    print(f"Thinking: {result['thinking']}")
    print(f"Answer: {result['answer']}")


asyncio.run(main())
```
By adding a thinking field to our output model, we instruct the LLM to show
its reasoning. This is called "Chain of Thought" prompting, and here it is
achieved simply by defining the output structure.
Key Takeaways
- No Prompt Engineering: Define data models instead of writing prompts - the framework generates prompts automatically from your schemas.
- Structured Output: All LLM responses are guaranteed to match your data model specification through constrained generation.
- Field Descriptions: Use descriptive Field annotations to guide the LLM on what each field should contain.
- Chain of Thought: Add a "thinking" field to your output model to get step-by-step reasoning from the LLM.
API References
AnswerWithThinking
Bases: DataModel
An answer with step-by-step reasoning.
By adding a 'thinking' field, we instruct the LLM to show its work. This is called "Chain of Thought" prompting - but we achieve it simply by defining the output structure!
Source code in examples/0_first_steps.py
Query
Bases: DataModel
The input to our program - a user's question.
The docstring becomes part of the schema description.
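The point about docstrings can be illustrated with plain Python: a class docstring can be lifted directly into a schema description. This is a simplified sketch of the idea, not Synalinks' actual mechanism:

```python
# Illustrative: how a class docstring can flow into a schema description.
class Query:
    """The input to our program - a user's question."""

schema = {
    "title": "Query",
    "description": Query.__doc__,  # the docstring becomes the description
    "type": "object",
}
print(schema["description"])
```

Because the description reaches the LLM through the generated prompt, writing a precise docstring on your data model is itself a form of instruction.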
setup()
Set up Synalinks for use.
show_prompt_template()
Display the default prompt template.