Conversational Applications
Synalinks is designed to handle conversational applications as well as
query-based systems. For a conversational application, the input data
model is a list of chat messages, and the output is an individual chat
message. The Program is then responsible for handling a single
conversation turn.
```mermaid
sequenceDiagram
    participant User
    participant Program
    participant LLM
    User->>Program: ChatMessages [msg1, msg2, ...]
    Program->>LLM: Full conversation context
    LLM-->>Program: New response
    Program-->>User: ChatMessage (assistant)
    Note over User,Program: Add response to history
    User->>Program: ChatMessages [..., new_msg]
```
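The turn loop above can be sketched in plain Python, independent of the Synalinks API (the `respond` stub below is a hypothetical stand-in for a Program, and the dict-based messages are an assumption for illustration):

```python
import asyncio

# Hypothetical stand-in for a Synalinks Program: it receives the full
# message history and returns one assistant message per call.
async def respond(messages):
    last = messages[-1]["content"]
    return {"role": "assistant", "content": f"You said: {last}"}

async def conversation_turn(history, user_text):
    """Run one conversation turn: append the user message, call the
    program on the full history, then append the reply to the history."""
    history.append({"role": "user", "content": user_text})
    reply = await respond(history)
    history.append(reply)
    return reply

history = []
reply = asyncio.run(conversation_turn(history, "Hello!"))
print(reply["content"])  # -> You said: Hello!
print(len(history))      # -> 2 (user turn + assistant turn)
```

The key point is that the program is stateless across calls: the caller owns the history and resends the whole conversation on every turn, exactly as in the diagram.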
```python
inputs = synalinks.Input(data_model=synalinks.ChatMessages)
outputs = await synalinks.Generator(
    language_model=language_model,
    streaming=False,
)(inputs)

program = synalinks.Program(
    inputs=inputs,
    outputs=outputs,
    name="simple_chatbot",
)
```
By default, if no `data_model` or `schema` is provided to the `Generator`,
it outputs a `ChatMessage`-like object. Streaming can only be enabled
when the data model is `None`.
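Streaming itself follows the usual async-iteration pattern in Python. A minimal sketch of consuming a streamed reply chunk by chunk, with a hypothetical `stream_reply` generator standing in for a streaming Generator (this is not the Synalinks API):

```python
import asyncio

# Hypothetical token stream: an async generator yielding chunks of the
# assistant reply as they are produced by the model.
async def stream_reply(prompt):
    for chunk in ["The capital ", "of France ", "is Paris."]:
        await asyncio.sleep(0)  # yield control, as a real client would
        yield chunk

async def collect(prompt):
    """Consume the stream chunk by chunk, then assemble the final
    ChatMessage-like dict once the stream is exhausted."""
    parts = []
    async for chunk in stream_reply(prompt):
        parts.append(chunk)
    return {"role": "assistant", "content": "".join(parts)}

message = asyncio.run(collect("What is the capital of France?"))
print(message["content"])  # -> The capital of France is Paris.
```

In a real application you would render each chunk as it arrives rather than buffering them all, which is what makes streaming useful for interactive chat.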
To use the chatbot, pass a `ChatMessages` object containing the conversation history:
```python
input_messages = synalinks.ChatMessages(
    messages=[
        synalinks.ChatMessage(
            role="user",
            content="Hello! What is the capital of France?",
        )
    ]
)

response = await program(input_messages)
```
Note: Streaming is disabled during training and should only be used in
the last Generator of your pipeline.
Key Takeaways
- Conversational Flow Management: Synalinks effectively manages conversational applications by handling inputs as a list of chat messages and generating individual chat messages as outputs.
- Streaming and Real-Time Interaction: Synalinks supports streaming for real-time interactions. However, streaming is disabled during training and should be used only in the final `Generator` of your pipeline.
- Simple Setup: Just use `ChatMessages` as the input data model and the `Generator` will handle the conversation context automatically.
