Base Optimizer class

Bases: SynalinksSaveable

Optimizer base class: all Synalinks optimizers inherit from this class.

This abstract base class provides the common infrastructure for all optimizers in Synalinks.

Concrete optimizer implementations must inherit from this class and implement the propose_new_candidates() method with their specific optimization logic.

Parameters:

    merging_rate (float): The rate controlling how often the crossover
        strategy is selected instead of mutation; the probability of
        crossover grows with the epoch count. Default: 0.02
    population_size (int): The maximum number of best candidates to keep
        during the optimization process. Default: 10
    name (str): Optional. The name of the optimizer. Default: None
    description (str): Optional. The description of the optimizer. Default: None
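
The snippet below is a minimal sketch of a concrete optimizer, assuming the
import path shown and an async `build()` hook (both inferred from the source
on this page); the example-sampling strategy itself is purely illustrative:

import random

from synalinks.src.optimizers.optimizer import Optimizer


class ShuffleExamples(Optimizer):
    """Illustrative optimizer that re-samples few-shot examples from past predictions."""

    async def build(self, trainable_variables):
        # `optimize()` awaits `self.build(...)` before the first step.
        self.built = True

    async def propose_new_candidates(
        self, step, trainable_variables, x=None, y=None, y_pred=None, training=False
    ):
        for variable in trainable_variables:
            predictions = variable.get("predictions")
            if predictions:
                # Hypothetical strategy: reuse up to 3 past predictions
                # as the new few-shot examples for this variable.
                examples = random.sample(predictions, min(3, len(predictions)))
                await self.assign_candidate(variable, examples=examples)
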
Source code in synalinks/src/optimizers/optimizer.py
class Optimizer(SynalinksSaveable):
    """Optimizer base class: all Synalinks optimizers inherit from this class.

    This abstract base class provides the common infrastructure for all optimizers in Synalinks.

    Concrete optimizer implementations must inherit from this class and implement
    the `propose_new_candidates()` method with their specific optimization logic.

    Args:
        merging_rate (float): The rate controlling how often the crossover
            strategy is selected instead of mutation (see
            `select_evolving_strategy()`).
        population_size (int): The maximum number of best candidates to keep
            during the optimization process.
        name (str): Optional. The name of the optimizer.
        description (str): Optional. The description of the optimizer.
    """

    def __init__(
        self,
        merging_rate=0.02,
        population_size=10,
        name=None,
        description=None,
        **kwargs,
    ):
        """Initialize the base optimizer.

        Sets up the optimizer's internal state, variable tracking, and naming.

        Args:
            merging_rate (float): The rate controlling how often the crossover
                strategy is selected instead of mutation.
            population_size (int): The maximum number of best candidates to keep
                during the optimization process.
            name (str): Optional name for the optimizer instance.
            description (str): Optional description for the optimizer.
            **kwargs (dict): Additional keyword arguments; a `ValueError` is
                raised if any are provided.

        Raises:
            ValueError: If unexpected keyword arguments are provided
        """
        self._lock = False

        if kwargs:
            raise ValueError(f"Argument(s) not recognized: {kwargs}")

        self.merging_rate = merging_rate
        self.population_size = population_size

        if name is None:
            name = auto_name(self.__class__.__name__)
        self.name = name

        if description is None:
            if self.__class__.__doc__:
                description = docstring_parser.parse(
                    self.__class__.__doc__
                ).short_description
            else:
                description = ""
        self.description = description

        self.built = False
        self._program = None
        self._meta_optimizer = None

        self._initialize_tracker()

        with backend.name_scope(self.name, caller=self):
            iterations = backend.Variable(
                initializer=Empty(data_model=Iterations),
                data_model=Iterations,
                trainable=False,
            )
        self._iterations = iterations

    @property
    def iterations(self):
        """Get the current iteration count.

        Returns:
            (int): Number of optimization iterations performed
        """
        return self._iterations.get("iterations")

    @property
    def epochs(self):
        """Get the current epoch number.

        Returns:
            (int): Number of epochs performed
        """
        return self._iterations.get("epochs")

    def increment_iterations(self):
        """Increment the iteration counter by 1.

        This method is called after each optimization step to track progress.
        """
        iterations = self._iterations.get("iterations")
        self._iterations.update({"iterations": iterations + 1})

    def increment_epochs(self):
        """Increment the epoch counter by 1.

        This method is called after each epoch to track progress.
        """
        epochs = self._iterations.get("epochs")
        self._iterations.update({"epochs": epochs + 1})

    def set_program(self, program):
        """Set the program that this optimizer will optimize.

        The program contains the model/pipeline that the optimizer will work on.

        Args:
            program (Program): The Synalinks program to optimize
        """
        self._program = program

    @property
    def program(self):
        """Get the program associated with this optimizer.

        Returns:
            (Program): The Synalinks program being optimized, or None if not set
        """
        return self._program

    def set_meta_optimizer(self, meta_optimizer):
        """Set the meta optimizer associated with this optimizer.

        Args:
            meta_optimizer (Optimizer): The meta optimizer
        """
        self._meta_optimizer = meta_optimizer

    @property
    def meta_optimizer(self):
        """Get the optimizer associated with this optimizer.

        Returns:
            (Optimizer): The meta optimizer
        """
        return self._meta_optimizer

    @property
    def reward_tracker(self):
        """Get the reward tracker from the associated program.

        The reward tracker monitors the performance/rewards during optimization.

        Returns:
            (RewardTracker): The reward tracker from the program, or None if no program is set
        """
        if self._program:
            return self._program._reward_tracker
        return None

    @tracking.no_automatic_dependency_tracking
    def _initialize_tracker(self):
        if hasattr(self, "_tracker"):
            return

        trainable_variables = []
        non_trainable_variables = []
        modules = []
        self._tracker = tracking.Tracker(
            {
                "trainable_variables": (
                    lambda x: isinstance(x, backend.Variable) and x.trainable,
                    trainable_variables,
                ),
                "non_trainable_variables": (
                    lambda x: isinstance(x, backend.Variable) and not x.trainable,
                    non_trainable_variables,
                ),
                "modules": (
                    lambda x: isinstance(x, Module) and not isinstance(x, Metric),
                    modules,
                ),
            },
            exclusions={"non_trainable_variables": ["trainable_variables"]},
        )
        self._trainable_variables = trainable_variables
        self._non_trainable_variables = non_trainable_variables
        self._modules = modules

    def __setattr__(self, name, value):
        # Track Variables, Modules, Metrics.
        if name != "_tracker":
            if not hasattr(self, "_tracker"):
                self._initialize_tracker()
            value = self._tracker.track(value)
        return super().__setattr__(name, value)

    @property
    def variables(self):
        return self._non_trainable_variables[:] + self._trainable_variables[:]

    @property
    def non_trainable_variables(self):
        return self._non_trainable_variables[:]

    @property
    def trainable_variables(self):
        variables = []
        for module in self._modules:
            variables.extend(module.trainable_variables)
        return variables

    def save_own_variables(self, store):
        """Get the state of this optimizer object."""
        for i, variable in enumerate(self.variables):
            store[str(i)] = variable.numpy()

    def load_own_variables(self, store):
        """Set the state of this optimizer object."""
        if len(store.keys()) != len(self.variables):
            msg = (
                f"Skipping variable loading for optimizer '{self.name}', "
                f"because it has {len(self.variables)} variables whereas "
                f"the saved optimizer has {len(store.keys())} variables. "
            )
            if len(self.variables) == 0:
                msg += (
                    "This is likely because the optimizer has not been called/built yet."
                )
            warnings.warn(msg, stacklevel=2)
            return
        for i, variable in enumerate(self.variables):
            variable.assign(store[str(i)])

    def _check_super_called(self):
        if not hasattr(self, "_lock"):
            raise RuntimeError(
                f"In optimizer '{self.__class__.__name__}', you forgot to call "
                "`super().__init__()` as the first statement "
                "in the `__init__()` method. "
                "Go add it!"
            )

    async def select_variable_name_to_update(self, trainable_variables):
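        # Softmax sampling over negated mean rewards: variables with a lower
        # average reward are more likely to be selected for the next update.
        # Unvisited variables receive a large placeholder reward, and
        # `self.sampling_temperature` is expected to be set by the subclass.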
        rewards = []
        for trainable_variable in trainable_variables:
            nb_visit = trainable_variable.get("nb_visit")
            cumulative_reward = trainable_variable.get("cumulative_reward")
            if nb_visit == 0:
                variable_reward = 100000
            else:
                variable_reward = cumulative_reward / nb_visit
            rewards.append(variable_reward)
        rewards = np.array(rewards)
        inverted_rewards = -rewards
        scaled_rewards = inverted_rewards / self.sampling_temperature
        exp_rewards = np.exp(scaled_rewards - np.max(scaled_rewards))
        probabilities = exp_rewards / np.sum(exp_rewards)
        selected_variable = np.random.choice(
            trainable_variables,
            size=1,
            replace=False,
            p=probabilities,
        ).tolist()[0]
        return selected_variable.name

    async def select_evolving_strategy(self):
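        # The probability of returning "crossover" is `merging_rate * epochs`;
        # merging therefore becomes more likely as training progresses.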
        rand = random.random()
        if rand > (self.merging_rate * self.epochs):
            return "mutation"
        else:
            return "crossover"

    async def select_candidate_to_merge(
        self,
        step,
        trainable_variable,
    ):
        best_candidates = trainable_variable.get("best_candidates")
        if len(best_candidates) > 0:
            selected_candidate = random.choice(best_candidates)
            return selected_candidate
        return None

    async def on_train_begin(
        self,
        trainable_variables,
    ):
        """Called at the beginning of the training

        Args:
            trainable_variables (list): The list of trainable variables.
        """
        mask = list(Trainable.keys())
        mask.remove("examples")

        for trainable_variable in trainable_variables:
            seed_candidates = trainable_variable.get("seed_candidates")
            masked_variable = out_mask_json(
                trainable_variable.get_json(),
                mask=mask,
            )
            seed_candidates.append(
                {
                    **masked_variable,
                }
            )

    async def on_train_end(
        self,
        trainable_variables,
    ):
        """Called at the end of the training

        Args:
            trainable_variables (list): The list of trainable variables
        """
        for variable in trainable_variables:
            best_candidates = variable.get("best_candidates")
            sorted_candidates = sorted(
                best_candidates,
                key=lambda x: x.get("reward"),
                reverse=True,
            )
            best_candidate = sorted_candidates[0]
            best_candidate = out_mask_json(
                best_candidate,
                mask=["reward"],
            )
            variable.update(
                {
                    **best_candidate,
                },
            )

    async def on_epoch_begin(
        self,
        epoch,
        trainable_variables,
    ):
        """Called at the beginning of an epoch

        Args:
            epoch (int): The epoch number
            trainable_variables (list): The list of trainable variables
        """
        for trainable_variable in trainable_variables:
            trainable_variable.update(
                {
                    "predictions": [],
                    "candidates": [],
                }
            )

    async def on_epoch_end(
        self,
        epoch,
        trainable_variables,
    ):
        """Called at the end of an epoch

        Args:
            epoch (int): The epoch number
            trainable_variables (list): The list of trainable variables
        """
        for trainable_variable in trainable_variables:
            candidates = trainable_variable.get("candidates")
            best_candidates = trainable_variable.get("best_candidates")
            all_candidates = candidates + best_candidates
            sorted_candidates = sorted(
                all_candidates,
                key=lambda x: x.get("reward"),
                reverse=True,
            )
            selected_candidates = sorted_candidates[: self.population_size]
            trainable_variable.update(
                {
                    "predictions": [],
                    "candidates": [],
                    "best_candidates": selected_candidates,
                }
            )
        self.increment_epochs()

    async def on_batch_begin(
        self,
        step,
        epoch,
        trainable_variables,
    ):
        """Called at the beginning of a batch

        Args:
            step (int): The batch number
            epoch (int): The epoch number
            trainable_variables (list): The list of trainable variables
        """
        for trainable_variable in trainable_variables:
            best_candidates = trainable_variable.get("best_candidates")
            if epoch == 0:
                seed_candidates = trainable_variable.get("seed_candidates")
                if len(seed_candidates) > 0:
                    seed_candidate = random.choice(seed_candidates)
                    trainable_variable.update(
                        {
                            **seed_candidate,
                        },
                    )
            else:
                if len(best_candidates) > 0:
                    best_candidate = random.choice(best_candidates)
                    best_candidate = out_mask_json(
                        best_candidate,
                        mask=["reward"],
                    )
                    trainable_variable.update(
                        {
                            **best_candidate,
                        },
                    )
                else:
                    seed_candidates = trainable_variable.get("seed_candidates")
                    if len(seed_candidates) > 0:
                        seed_candidate = random.choice(seed_candidates)
                        trainable_variable.update(
                            {
                                **seed_candidate,
                            },
                        )
            trainable_variable.update(
                {
                    "nb_visit": 0,
                    "cumulative_reward": 0.0,
                },
            )

    async def on_batch_end(
        self,
        step,
        epoch,
        trainable_variables,
    ):
        """Called at the end of a batch

        Args:
            step (int): The batch number
            epoch (int): The epoch number
            trainable_variables (list): The list of trainable variables
        """
        for trainable_variable in trainable_variables:
            best_candidates = trainable_variable.get("best_candidates")
            if len(best_candidates) > 0:
                sorted_candidates = sorted(
                    best_candidates,
                    key=lambda x: x.get("reward"),
                    reverse=True,
                )
                best_candidate = sorted_candidates[0]
                history = trainable_variable.get("history")
                if len(history) > 0:
                    last_candidate = history[-1]
                    if last_candidate != best_candidate:
                        history.append(best_candidate)
                else:
                    history.append(best_candidate)
                best_candidate = out_mask_json(
                    best_candidate,
                    mask=["reward"],
                )
                trainable_variable.update(
                    {
                        **best_candidate,
                    },
                )
        self.increment_iterations()

    async def optimize(
        self,
        step,
        trainable_variables,
        x=None,
        y=None,
        val_x=None,
        val_y=None,
    ):
        """Method for performing optimization.

        Args:
            step (int): The training step.
            trainable_variables (list): Variables to be optimized
            x (np.ndarray): Training batch input data. Must be array-like.
            y (np.ndarray): Training batch target data. Must be array-like.
            val_x (np.ndarray): Input validation data. Must be array-like.
            val_y (np.ndarray): Target validation data. Must be array-like.
        """
        self._check_super_called()
        if not self.built:
            await self.build(trainable_variables)

        if self.meta_optimizer and not self.meta_optimizer.built:
            await self.meta_optimizer.build(self.trainable_variables)

        y_pred = await self.program.predict_on_batch(
            x=x,
            training=True,
        )

        reward = await self.program.compute_reward(
            x=x,
            y=y,
            y_pred=y_pred,
        )

        await self.assign_reward_to_predictions(
            trainable_variables,
            reward=reward,
        )

        if self.trainable_variables and self.meta_optimizer and step > 0:
            await self.meta_optimizer.assign_reward_to_predictions(
                self.trainable_variables,
                reward=reward,
            )

            await self.meta_optimizer.propose_new_candidates(
                step,
                self.trainable_variables,
            )

        await self.propose_new_candidates(
            step,
            trainable_variables,
            x=x,
            y=y,
            y_pred=y_pred,
            training=self.trainable_variables and self.meta_optimizer,
        )

        y_pred = await self.program.predict_on_batch(
            x=val_x,
            training=False,
        )

        reward = await self.program.compute_reward(
            x=val_x,
            y=val_y,
            y_pred=y_pred,
        )

        for trainable_variable in trainable_variables:
            await self.maybe_add_candidate(
                step,
                trainable_variable,
                reward=reward,
            )

        if self.trainable_variables and self.meta_optimizer:
            for trainable_variable in self.trainable_variables:
                await self.meta_optimizer.maybe_add_candidate(
                    step,
                    trainable_variable,
                    reward=reward,
                )

        await self.reward_tracker.update_state(reward)
        metrics = await self.program.compute_metrics(val_x, val_y, y_pred)
        return metrics

    async def propose_new_candidates(
        self,
        step,
        trainable_variables,
        x=None,
        y=None,
        y_pred=None,
        training=False,
    ):
        raise NotImplementedError(
            "Optimizer subclasses must implement the `propose_new_candidates()` method."
        )

    async def assign_reward_to_predictions(
        self,
        trainable_variables,
        reward=None,
    ):
        """Assign rewards to predictions that don't have them yet.

        This method updates all predictions in trainable variables that have
        None as their reward value. It's typically called after computing
        rewards for a batch of predictions.

        Args:
            trainable_variables (list): Variables containing predictions
            reward (float): Reward value to assign (defaults to 0.0 if None/False)
        """
        if not reward:
            reward = 0.0
        for trainable_variable in trainable_variables:
            predictions = trainable_variable.get("predictions")
            for p in predictions:
                if p["reward"] is None:
                    p["reward"] = reward
                    nb_visit = trainable_variable.get("nb_visit")
                    cumulative_reward = trainable_variable.get("cumulative_reward")
                    trainable_variable.update(
                        {
                            "nb_visit": nb_visit + 1,
                            "cumulative_reward": cumulative_reward + reward,
                        }
                    )

    async def assign_candidate(
        self,
        trainable_variable,
        new_candidate=None,
        examples=None,
    ):
        """Assign a new candidate configuration to a trainable variable.

        This method updates a variable with either a complete new candidate
        or just new examples for few-shot learning.

        Args:
            trainable_variable (Variable): The variable to update
            new_candidate (JsonDataModel): New candidate (optional)
            examples (list): New examples for few-shot learning (optional)
        """
        if new_candidate:
            if examples:
                # Update with both new candidate and examples
                trainable_variable.update(
                    {
                        **new_candidate.get_json(),
                        "examples": examples,
                    },
                )
            else:
                # Update with just new candidate
                trainable_variable.update(
                    {
                        **new_candidate.get_json(),
                    },
                )
        elif examples:
            # Update with just new examples
            trainable_variable.update(
                {
                    "examples": examples,
                },
            )

    async def maybe_add_candidate(
        self,
        step,
        trainable_variable,
        new_candidate=None,
        examples=None,
        reward=None,
    ):
        """Maybe add new candidate to candidates.

        Args:
            step (int): The training step.
            trainable_variable (Variable): The variable to add the candidate to.
            new_candidate (dict): New candidate configuration (optional).
            examples (list): New examples for few-shot learning (optional).
            reward (float): The candidate reward.
        """
        if not reward:
            reward = 0.0
        mask = list(Trainable.keys())
        mask.append("reward")
        if new_candidate:
            new_candidate = out_mask_json(
                new_candidate.get_json(),
                mask=mask,
            )
        else:
            new_candidate = out_mask_json(
                trainable_variable.get_json(),
                mask=mask,
            )
        if not examples:
            examples = trainable_variable.get("examples")

        candidates = trainable_variable.get("candidates")
        best_candidates = trainable_variable.get("best_candidates")
        all_candidates = best_candidates + candidates
        is_present = False
        for candidate in all_candidates:
            if out_mask_json(candidate, mask=mask) == new_candidate:
                is_present = True
                break
        if not is_present:
            candidates.append(
                {
                    **new_candidate,
                    "examples": examples,
                    "reward": reward,
                }
            )

    def get_config(self):
        return {
            "merging_rate": self.merging_rate,
            "population_size": self.population_size,
            "name": self.name,
            "description": self.description,
        }

    @classmethod
    def from_config(cls, config):
        return cls(**config)

    def __repr__(self):
        return f"<Optimizer name={self.name} description={self.description}>"

epochs property

Get the current epoch number.

Returns:

    int: Number of epochs performed

iterations property

Get the current iteration count.

Returns:

    int: Number of optimization iterations performed

meta_optimizer property

Get the meta optimizer associated with this optimizer.

Returns:

    Optimizer: The meta optimizer

program property

Get the program associated with this optimizer.

Returns:

    Program: The Synalinks program being optimized, or None if not set

reward_tracker property

Get the reward tracker from the associated program.

The reward tracker monitors the performance/rewards during optimization.

Returns:

    RewardTracker: The reward tracker from the program, or None if no program is set

__init__(merging_rate=0.02, population_size=10, name=None, description=None, **kwargs)

Initialize the base optimizer.

Sets up the optimizer's internal state, variable tracking, and naming.

Parameters:

    merging_rate (float): The rate controlling how often the crossover
        strategy is selected instead of mutation. Default: 0.02
    population_size (int): The maximum number of best candidates to keep
        during the optimization process. Default: 10
    name (str): Optional name for the optimizer instance. Default: None
    description (str): Optional description for the optimizer. Default: None
    **kwargs: Additional keyword arguments; a ValueError is raised if any
        are provided. Default: {}

Raises:

    ValueError: If unexpected keyword arguments are provided
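
A quick, hedged check of the keyword validation described above, using the
base class directly (the misspelled keyword is deliberate):

from synalinks.src.optimizers.optimizer import Optimizer

try:
    Optimizer(populaton_size=5)  # typo: should be `population_size`
except ValueError as e:
    print(e)  # Argument(s) not recognized: {'populaton_size': 5}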

Source code in synalinks/src/optimizers/optimizer.py
def __init__(
    self,
    merging_rate=0.02,
    population_size=10,
    name=None,
    description=None,
    **kwargs,
):
    """Initialize the base optimizer.

    Sets up the optimizer's internal state, variable tracking, and naming.

    Args:
        merging_rate (float): The rate controlling how often the crossover
            strategy is selected instead of mutation.
        population_size (int): The maximum number of best candidates to keep
            during the optimization process.
        name (str): Optional name for the optimizer instance.
        description (str): Optional description for the optimizer.
        **kwargs (dict): Additional keyword arguments; a `ValueError` is
            raised if any are provided.

    Raises:
        ValueError: If unexpected keyword arguments are provided
    """
    self._lock = False

    if kwargs:
        raise ValueError(f"Argument(s) not recognized: {kwargs}")

    self.merging_rate = merging_rate
    self.population_size = population_size

    if name is None:
        name = auto_name(self.__class__.__name__)
    self.name = name

    if description is None:
        if self.__class__.__doc__:
            description = docstring_parser.parse(
                self.__class__.__doc__
            ).short_description
        else:
            description = ""
    self.description = description

    self.built = False
    self._program = None
    self._meta_optimizer = None

    self._initialize_tracker()

    with backend.name_scope(self.name, caller=self):
        iterations = backend.Variable(
            initializer=Empty(data_model=Iterations),
            data_model=Iterations,
            trainable=False,
        )
    self._iterations = iterations

assign_candidate(trainable_variable, new_candidate=None, examples=None) async

Assign a new candidate configuration to a trainable variable.

This method updates a variable with either a complete new candidate or just new examples for few-shot learning.

Parameters:

    trainable_variable (Variable): The variable to update (required)
    new_candidate (JsonDataModel): New candidate (optional). Default: None
    examples (list): New examples for few-shot learning (optional). Default: None
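
Usage sketch (hedged): `variable` stands for one of the program's trainable
variables and `new_examples` for a list of example dicts gathered elsewhere:

await optimizer.assign_candidate(
    variable,
    examples=new_examples,  # few-shot examples only; no new candidate
)
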
Source code in synalinks/src/optimizers/optimizer.py
async def assign_candidate(
    self,
    trainable_variable,
    new_candidate=None,
    examples=None,
):
    """Assign a new candidate configuration to a trainable variable.

    This method updates a variable with either a complete new candidate
    or just new examples for few-shot learning.

    Args:
        trainable_variable (Variable): The variable to update
        new_candidate (JsonDataModel): New candidate (optional)
        examples (list): New examples for few-shot learning (optional)
    """
    if new_candidate:
        if examples:
            # Update with both new candidate and examples
            trainable_variable.update(
                {
                    **new_candidate.get_json(),
                    "examples": examples,
                },
            )
        else:
            # Update with just new candidate
            trainable_variable.update(
                {
                    **new_candidate.get_json(),
                },
            )
    elif examples:
        # Update with just new examples
        trainable_variable.update(
            {
                "examples": examples,
            },
        )

assign_reward_to_predictions(trainable_variables, reward=None) async

Assign rewards to predictions that don't have them yet.

This method updates all predictions in trainable variables that have None as their reward value. It's typically called after computing rewards for a batch of predictions.

Parameters:

    trainable_variables (list): Variables containing predictions (required)
    reward (float): Reward value to assign (defaults to 0.0 if None/False). Default: None
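
For orientation, a hedged sketch of the state this method touches (field
names come from the source below; the prediction payload is illustrative):

# Before: variable JSON holds pending predictions, e.g.
#   {"predictions": [{..., "reward": None}], "nb_visit": 0, "cumulative_reward": 0.0}
await optimizer.assign_reward_to_predictions(trainable_variables, reward=0.8)
# After: each pending prediction gets reward 0.8, and nb_visit /
# cumulative_reward are incremented on the owning variable.
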
Source code in synalinks/src/optimizers/optimizer.py
async def assign_reward_to_predictions(
    self,
    trainable_variables,
    reward=None,
):
    """Assign rewards to predictions that don't have them yet.

    This method updates all predictions in trainable variables that have
    None as their reward value. It's typically called after computing
    rewards for a batch of predictions.

    Args:
        trainable_variables (list): Variables containing predictions
        reward (float): Reward value to assign (defaults to 0.0 if None/False)
    """
    if not reward:
        reward = 0.0
    for trainable_variable in trainable_variables:
        predictions = trainable_variable.get("predictions")
        for p in predictions:
            if p["reward"] is None:
                p["reward"] = reward
                nb_visit = trainable_variable.get("nb_visit")
                cumulative_reward = trainable_variable.get("cumulative_reward")
                trainable_variable.update(
                    {
                        "nb_visit": nb_visit + 1,
                        "cumulative_reward": cumulative_reward + reward,
                    }
                )

increment_epochs()

Increment the epoch counter by 1.

This method is called after each epoch to track progress.

Source code in synalinks/src/optimizers/optimizer.py
def increment_epochs(self):
    """Increment the epoch counter by 1.

    This method is called after each epoch to track progress.
    """
    epochs = self._iterations.get("epochs")
    self._iterations.update({"epochs": epochs + 1})

increment_iterations()

Increment the iteration counter by 1.

This method is called after each optimization step to track progress.

Source code in synalinks/src/optimizers/optimizer.py
def increment_iterations(self):
    """Increment the iteration counter by 1.

    This method is called after each optimization step to track progress.
    """
    iterations = self._iterations.get("iterations")
    self._iterations.update({"iterations": iterations + 1})

load_own_variables(store)

Set the state of this optimizer object.

Source code in synalinks/src/optimizers/optimizer.py
def load_own_variables(self, store):
    """Set the state of this optimizer object."""
    if len(store.keys()) != len(self.variables):
        msg = (
            f"Skipping variable loading for optimizer '{self.name}', "
            f"because it has {len(self.variables)} variables whereas "
            f"the saved optimizer has {len(store.keys())} variables. "
        )
        if len(self.variables) == 0:
            msg += (
                "This is likely because the optimizer has not been called/built yet."
            )
        warnings.warn(msg, stacklevel=2)
        return
    for i, variable in enumerate(self.variables):
        variable.assign(store[str(i)])

maybe_add_candidate(step, trainable_variable, new_candidate=None, examples=None, reward=None) async

Maybe add a new candidate to the variable's candidate list.

Parameters:

    step (int): The training step (required)
    trainable_variable (Variable): The variable to add the candidate to (required)
    new_candidate (dict): New candidate configuration (optional). Default: None
    examples (list): New examples for few-shot learning (optional). Default: None
    reward (float): The candidate reward. Default: None
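
Usage sketch (hedged): record the variable's current configuration as a
candidate tagged with the latest validation reward; duplicates (compared
after masking bookkeeping fields) are skipped:

await optimizer.maybe_add_candidate(step, trainable_variable, reward=reward)
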
Source code in synalinks/src/optimizers/optimizer.py
async def maybe_add_candidate(
    self,
    step,
    trainable_variable,
    new_candidate=None,
    examples=None,
    reward=None,
):
    """Maybe add new candidate to candidates.

    Args:
        step (int): The training step.
        trainable_variable (Variable): The variable to add the candidate to.
        new_candidate (dict): New candidate configuration (optional).
        examples (list): New examples for few-shot learning (optional).
        reward (float): The candidate reward.
    """
    if not reward:
        reward = 0.0
    mask = list(Trainable.keys())
    mask.append("reward")
    if new_candidate:
        new_candidate = out_mask_json(
            new_candidate.get_json(),
            mask=mask,
        )
    else:
        new_candidate = out_mask_json(
            trainable_variable.get_json(),
            mask=mask,
        )
    if not examples:
        examples = trainable_variable.get("examples")

    candidates = trainable_variable.get("candidates")
    best_candidates = trainable_variable.get("best_candidates")
    all_candidates = best_candidates + candidates
    is_present = False
    for candidate in all_candidates:
        if out_mask_json(candidate, mask=mask) == new_candidate:
            is_present = True
            break
    if not is_present:
        candidates.append(
            {
                **new_candidate,
                "examples": examples,
                "reward": reward,
            }
        )

on_batch_begin(step, epoch, trainable_variables) async

Called at the beginning of a batch

Parameters:

    step (int): The batch number (required)
    epoch (int): The epoch number (required)
    trainable_variables (list): The list of trainable variables (required)
Source code in synalinks/src/optimizers/optimizer.py
async def on_batch_begin(
    self,
    step,
    epoch,
    trainable_variables,
):
    """Called at the beginning of a batch

    Args:
        step (int): The batch number
        epoch (int): The epoch number
        trainable_variables (list): The list of trainable variables
    """
    for trainable_variable in trainable_variables:
        best_candidates = trainable_variable.get("best_candidates")
        if epoch == 0:
            seed_candidates = trainable_variable.get("seed_candidates")
            if len(seed_candidates) > 0:
                seed_candidate = random.choice(seed_candidates)
                trainable_variable.update(
                    {
                        **seed_candidate,
                    },
                )
        else:
            if len(best_candidates) > 0:
                best_candidate = random.choice(best_candidates)
                best_candidate = out_mask_json(
                    best_candidate,
                    mask=["reward"],
                )
                trainable_variable.update(
                    {
                        **best_candidate,
                    },
                )
            else:
                seed_candidates = trainable_variable.get("seed_candidates")
                if len(seed_candidates) > 0:
                    seed_candidate = random.choice(seed_candidates)
                    trainable_variable.update(
                        {
                            **seed_candidate,
                        },
                    )
        trainable_variable.update(
            {
                "nb_visit": 0,
                "cumulative_reward": 0.0,
            },
        )

on_batch_end(step, epoch, trainable_variables) async

Called at the end of a batch

Parameters:

    step (int): The batch number (required)
    epoch (int): The epoch number (required)
    trainable_variables (list): The list of trainable variables (required)
Source code in synalinks/src/optimizers/optimizer.py
async def on_batch_end(
    self,
    step,
    epoch,
    trainable_variables,
):
    """Called at the end of a batch

    Args:
        step (int): The batch number
        epoch (int): The epoch number
        trainable_variables (list): The list of trainable variables
    """
    for trainable_variable in trainable_variables:
        best_candidates = trainable_variable.get("best_candidates")
        if len(best_candidates) > 0:
            sorted_candidates = sorted(
                best_candidates,
                key=lambda x: x.get("reward"),
                reverse=True,
            )
            best_candidate = sorted_candidates[0]
            history = trainable_variable.get("history")
            if len(history) > 0:
                last_candidate = history[-1]
                if last_candidate != best_candidate:
                    history.append(best_candidate)
            else:
                history.append(best_candidate)
            best_candidate = out_mask_json(
                best_candidate,
                mask=["reward"],
            )
            trainable_variable.update(
                {
                    **best_candidate,
                },
            )
    self.increment_iterations()

on_epoch_begin(epoch, trainable_variables) async

Called at the beginning of an epoch

Parameters:

    epoch (int): The epoch number (required)
    trainable_variables (list): The list of trainable variables (required)
Source code in synalinks/src/optimizers/optimizer.py
async def on_epoch_begin(
    self,
    epoch,
    trainable_variables,
):
    """Called at the beginning of an epoch

    Args:
        epoch (int): The epoch number
        trainable_variables (list): The list of trainable variables
    """
    for trainable_variable in trainable_variables:
        trainable_variable.update(
            {
                "predictions": [],
                "candidates": [],
            }
        )

on_epoch_end(epoch, trainable_variables) async

Called at the end of an epoch

Parameters:

    epoch (int): The epoch number (required)
    trainable_variables (list): The list of trainable variables (required)
Source code in synalinks/src/optimizers/optimizer.py
async def on_epoch_end(
    self,
    epoch,
    trainable_variables,
):
    """Called at the end of an epoch

    Args:
        epoch (int): The epoch number
        trainable_variables (list): The list of trainable variables
    """
    for trainable_variable in trainable_variables:
        candidates = trainable_variable.get("candidates")
        best_candidates = trainable_variable.get("best_candidates")
        all_candidates = candidates + best_candidates
        sorted_candidates = sorted(
            all_candidates,
            key=lambda x: x.get("reward"),
            reverse=True,
        )
        selected_candidates = sorted_candidates[: self.population_size]
        trainable_variable.update(
            {
                "predictions": [],
                "candidates": [],
                "best_candidates": selected_candidates,
            }
        )
    self.increment_epochs()

on_train_begin(trainable_variables) async

Called at the beginning of training

Parameters:

    trainable_variables (list): The list of trainable variables (required)
Source code in synalinks/src/optimizers/optimizer.py
async def on_train_begin(
    self,
    trainable_variables,
):
    """Called at the beginning of the training

    Args:
        trainable_variables (list): The list of trainable variables.
    """
    mask = list(Trainable.keys())
    mask.remove("examples")

    for trainable_variable in trainable_variables:
        seed_candidates = trainable_variable.get("seed_candidates")
        masked_variable = out_mask_json(
            trainable_variable.get_json(),
            mask=mask,
        )
        seed_candidates.append(
            {
                **masked_variable,
            }
        )

on_train_end(trainable_variables) async

Called at the end of training

Parameters:

    trainable_variables (list): The list of trainable variables (required)
Source code in synalinks/src/optimizers/optimizer.py
async def on_train_end(
    self,
    trainable_variables,
):
    """Called at the end of the training

    Args:
        trainable_variables (list): The list of trainable variables
    """
    for variable in trainable_variables:
        best_candidates = variable.get("best_candidates")
        sorted_candidates = sorted(
            best_candidates,
            key=lambda x: x.get("reward"),
            reverse=True,
        )
        best_candidate = sorted_candidates[0]
        best_candidate = out_mask_json(
            best_candidate,
            mask=["reward"],
        )
        variable.update(
            {
                **best_candidate,
            },
        )

optimize(step, trainable_variables, x=None, y=None, val_x=None, val_y=None) async

Method for performing optimization.

Parameters:

    step (int): The training step (required)
    trainable_variables (list): Variables to be optimized (required)
    x (np.ndarray): Training batch input data. Must be array-like. Default: None
    y (np.ndarray): Training batch target data. Must be array-like. Default: None
    val_x (np.ndarray): Input validation data. Must be array-like. Default: None
    val_y (np.ndarray): Target validation data. Must be array-like. Default: None
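
In practice this method is driven by the training loop rather than called by
hand; still, a hedged sketch of a single step looks like this (assuming the
program exposes its trainable variables, as elsewhere on this page):

metrics = await optimizer.optimize(
    step,
    program.trainable_variables,
    x=x_batch,      # training batch, used to propose new candidates
    y=y_batch,
    val_x=x_val,    # validation data, used to score the candidates
    val_y=y_val,
)
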
Source code in synalinks/src/optimizers/optimizer.py
async def optimize(
    self,
    step,
    trainable_variables,
    x=None,
    y=None,
    val_x=None,
    val_y=None,
):
    """Method for performing optimization.

    Args:
        step (int): The training step.
        trainable_variables (list): Variables to be optimized
        x (np.ndarray): Training batch input data. Must be array-like.
        y (np.ndarray): Training batch target data. Must be array-like.
        val_x (np.ndarray): Input validation data. Must be array-like.
        val_y (np.ndarray): Target validation data. Must be array-like.
    """
    self._check_super_called()
    if not self.built:
        await self.build(trainable_variables)

    if self.meta_optimizer and not self.meta_optimizer.built:
        await self.meta_optimizer.build(self.trainable_variables)

    y_pred = await self.program.predict_on_batch(
        x=x,
        training=True,
    )

    reward = await self.program.compute_reward(
        x=x,
        y=y,
        y_pred=y_pred,
    )

    await self.assign_reward_to_predictions(
        trainable_variables,
        reward=reward,
    )

    if self.trainable_variables and self.meta_optimizer and step > 0:
        await self.meta_optimizer.assign_reward_to_predictions(
            self.trainable_variables,
            reward=reward,
        )

        await self.meta_optimizer.propose_new_candidates(
            step,
            self.trainable_variables,
        )

    await self.propose_new_candidates(
        step,
        trainable_variables,
        x=x,
        y=y,
        y_pred=y_pred,
        training=self.trainable_variables and self.meta_optimizer,
    )

    y_pred = await self.program.predict_on_batch(
        x=val_x,
        training=False,
    )

    reward = await self.program.compute_reward(
        x=val_x,
        y=val_y,
        y_pred=y_pred,
    )

    for trainable_variable in trainable_variables:
        await self.maybe_add_candidate(
            step,
            trainable_variable,
            reward=reward,
        )

    if self.trainable_variables and self.meta_optimizer:
        for trainable_variable in self.trainable_variables:
            await self.meta_optimizer.maybe_add_candidate(
                step,
                trainable_variable,
                reward=reward,
            )

    await self.reward_tracker.update_state(reward)
    metrics = await self.program.compute_metrics(val_x, val_y, y_pred)
    return metrics

save_own_variables(store)

Save the state of this optimizer object.

Source code in synalinks/src/optimizers/optimizer.py
def save_own_variables(self, store):
    """Get the state of this optimizer object."""
    for i, variable in enumerate(self.variables):
        store[str(i)] = variable.numpy()

set_meta_optimizer(meta_optimizer)

Set the meta optimizer associated with this optimizer.

Parameters:

    meta_optimizer (Optimizer): The meta optimizer (required)
Source code in synalinks/src/optimizers/optimizer.py
def set_meta_optimizer(self, meta_optimizer):
    """Set the meta optimizer associated with this optimizer.

    Args:
        meta_optimizer (Optimizer): The meta optimizer
    """
    self._meta_optimizer = meta_optimizer

set_program(program)

Set the program that this optimizer will optimize.

The program contains the model/pipeline that the optimizer will work on.

Parameters:

    program (Program): The Synalinks program to optimize (required)
Source code in synalinks/src/optimizers/optimizer.py
def set_program(self, program):
    """Set the program that this optimizer will optimize.

    The program contains the model/pipeline that the optimizer will work on.

    Args:
        program (Program): The Synalinks program to optimize
    """
    self._program = program
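
Typical wiring, as a hedged sketch; `Program.compile()` presumably performs
these calls internally, so they are shown here only for clarity:

optimizer = ShuffleExamples(population_size=10)  # the sketch subclass from above
optimizer.set_program(program)
optimizer.set_meta_optimizer(meta_opt)  # optional meta optimizer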