Optim

VMCOptimizer

vmc/optim/optimizer/VMCOptimizer

class VMCOptimizer(BaseVMCOptimizer):

    def __init__(
        self,
        nqs: DDP,
        sampler_param: dict,
        electron_info: ElectronInfo,
        opt: Optimizer,
        lr_scheduler: Union[List[LRScheduler], LRScheduler] = None,
        max_iter: int = 2000,
        dtype: Dtype = None,
        external_model: any = None,
        check_point: str = None,
        read_model_only: bool = False,
        only_sample: bool = False,
        pre_CI: CIWavefunction = None,
        pre_train_info: dict = None,
        clean_opt_state: bool = False,
        noise_lambda: float = 0.05,
        sr: bool = False,
        interval: int = 100,
        prefix: str = "VMC",
        MAX_AD_DIM: int = -1,
        kfac: KFACPreconditioner = None,  # type: ignore
        use_clip_grad: bool = False,
        max_grad_norm: float = 1.0,
        max_grad_value: float = 1.0,
        start_clip_grad: int = None,
        clip_grad_method: str = "l2",
        clip_grad_scheduler: Optional[Callable[[int], float]] = None,
        use_3sigma: bool = False,
        k_step_clip: int = 100,
        use_spin_raising: bool = False,
        spin_raising_coeff: float = 1.0,
        only_output_spin_raising: bool = False,
        spin_raising_scheduler: Optional[Callable[[int], float]] = None,
    )

opt-params

from torch import optim

from utils import ElectronInfo, Dtype

# model, dtype, sampler_param, and electron_info are defined elsewhere.
opt_type = optim.AdamW
opt_params = {"lr": 0.001, "betas": (0.9, 0.999)}
opt = opt_type(model.parameters(), **opt_params)

prefix = "vmc"

def clip_grad_scheduler(step):
    # Tighten the gradient-clipping threshold as optimization proceeds.
    if step <= 4000:
        max_grad = 1.0
    elif step <= 8000:
        max_grad = 0.1
    else:
        max_grad = 0.01
    return max_grad

vmc_opt_params = {
    "nqs": model,
    "opt": opt,
    # "lr_scheduler": lr_scheduler,
    # "read_model_only": True,
    "dtype": dtype,
    "sampler_param": sampler_param,
    # "only_sample": True,
    "electron_info": electron_info,
    # "use_spin_raising": True,
    # "spin_raising_coeff": 1.0,
    # "only_output_spin_raising": True,
    "max_iter": 5000,
    "interval": 100,
    "MAX_AD_DIM": 80000,
    # "check_point": f"./h50/focus-init/checkpoint/H50-2.00-oao-mps-rnn-dcut-30-222-focus-20w-checkpoint.pth",
    "prefix": prefix,
    "use_clip_grad": True,
    "max_grad_norm": 1.0,
    "start_clip_grad": -1,
    "clip_grad_scheduler": clip_grad_scheduler,
}
  • nqs: the ansatz (e.g., Transformer, MPS-RNN, Graph-MPS-RNN).

  • opt: the optimizer (e.g., Adam, AdamW, SGD).

  • lr_scheduler: learning-rate scheduler. Default: None.

  • read_model_only: read only the model from the checkpoint file.

  • dtype: data type and device, e.g., Dtype(dtype=torch.complex128, device="cuda").

  • sampler_param: see sample-param.

  • only_sample: skip the gradient calculation; used to compute the energy only.

  • max_iter: the number of optimization iterations.

  • interval: save a checkpoint file every interval iterations.

  • MAX_AD_DIM: the batch size (nbatch) used in the backward pass.

  • check_point: read the model/optimizer/lr_scheduler from the checkpoint file. Default: None.

  • prefix: the prefix of the checkpoint file, e.g., vmc-checkpoint.pth.

  • use_clip_grad: clip gradients. Default: False.

  • max_grad_norm: the maximum l2-norm used when clipping gradients.

  • start_clip_grad: start clipping gradients from the k-th iteration.

  • clip_grad_scheduler: a scheduler for the clipping threshold, of type Callable[[int], float].

  • sr: use minSR.

  • lm: use the Linear method (see the sketch after this list).

  • LM_delta: the shift delta in the Linear method.
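Putting the pieces together, the construction below is a hedged sketch based on the parameter list above. Note that the "lm" and "LM_delta" keys follow the bullet descriptions but do not appear in the signature shown at the top of this page, so check your version before relying on them.

vmc_opt_params.update({
    "sr": False,       # assumption: leave minSR off when the Linear method is on
    "lm": True,        # switch the gradient to LM_grad (pynqs/optim/grad/lm.py)
    "LM_delta": 0.1,   # initial shift delta; its decay is described below
})
vmc = VMCOptimizer(**vmc_opt_params)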

Optimizer

Linear method references:

    1. J. Chem. Phys. 152, 024111 (2020); doi: 10.1063/1.5125803

    2. Phys. Rev. Research 7, 043351 (2025)

The Linear-method gradient is computed in the function LM_grad in pynqs/optim/grad/lm.py. To use it, simply set lm: True in the VMCOptimizer class. In addition, the hyperparameter \(\delta\) (the LM_delta parameter) needs tuning; it defaults to \(0.1\) and decays as delta = max(delta * 0.9**epoch, 1e-6). In principle, this term should decay to \(0\) by the end of the optimization.
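A minimal sketch of this decay schedule (the function name lm_delta_schedule is illustrative; the 0.9 base, 0.1 default, and 1e-6 floor are taken from the formula above):

def lm_delta_schedule(epoch: int, lm_delta: float = 0.1) -> float:
    # delta = max(delta * 0.9**epoch, 1e-6)
    return max(lm_delta * 0.9**epoch, 1e-6)

# The shift decays geometrically toward the 1e-6 floor:
# [0.1, 0.09, 0.081, 0.0729]
print([lm_delta_schedule(e) for e in range(4)])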

This code has two parts: computing the gradient and performing the update. The gradient is computed following J. Chem. Phys. 152, 024111 (2020); the detailed derivation is given in the documentation. In short, one constructs and solves a generalized eigenvalue problem (GEVP):

\[\begin{split}Lc = \tilde{E} Rc,\quad L = \begin{bmatrix} E & G^\top_{\rm r} \\ G_{\rm c} & H \end{bmatrix} + \delta I , R = \begin{bmatrix} 1&\\ &S \end{bmatrix}+\delta'I\end{split}\]

with

\[\begin{split}[G_{\rm r}]_i &= \langle{h_i(n)}\rangle - E\langle{g_i(n)}\rangle\\ [G_{\rm c}]_i &= \sum_n p(n) \bar{\epsilon}(n) O_i(n)\\ S_{ij} &= \sum_n p(n) \bar{o}_i(n)O_j(n)\\ H_{ij} &= \sum_n p(n)\bar{o}_i(n) h_j(n) - [G_{\rm c}]_i\langle{O_j(n)}\rangle\end{split}\]

where

\[\begin{split}\bar{\epsilon}(n) &= E_{\rm loc}(n) - \langle{E_{\rm loc}(n)}\rangle\\ \bar{o}_i(n) &= O_i(n) - \langle{O_i(n)}\rangle\end{split}\]

and

\[\begin{split}g_i(n) &= \frac{\langle n|\varPsi_i\rangle}{\langle n|\varPsi\rangle} = \frac{1}{\varPsi(n)} \frac{\partial\varPsi(n)}{\partial\theta_i} = O_i(n)\\ h_i(n) &= \frac{\langle n|\hat{H}|\varPsi_i\rangle}{\langle n|\varPsi\rangle} = \partial_i E_{\rm loc}(n) + O_i(n)E_{\rm loc}(n),\quad \partial_i E_{\rm loc}(n) = \frac{\partial}{\partial\theta_i}\sum_{m\in SD}H_{nm}\frac{\varPsi(m)}{\varPsi(n)}\end{split}\]

Here \(|\varPsi_i\rangle = \partial_{\theta_i}|\varPsi\rangle\). For the parameter update, a line-search-like scheme similar to that of the paper above is implemented; see the try_step_update function.
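The sketch below shows how the GEVP step can be assembled and solved once the blocks \(E\), \(G_{\rm r}\), \(G_{\rm c}\), \(H\), and \(S\) have been estimated from the samples. It is not the LM_grad implementation itself: NumPy/SciPy, the function name solve_lm_gevp, the \(\delta'\) default, and the choice of the lowest-real-eigenvalue root are illustrative assumptions.

import numpy as np
from scipy.linalg import eig

def solve_lm_gevp(E, G_r, G_c, H, S, delta=0.1, delta_p=1e-3):
    """Solve L c = E_tilde R c with the block structure given above."""
    n = H.shape[0]
    # L = [[E, G_r^T], [G_c, H]] + delta * I
    L = np.empty((n + 1, n + 1), dtype=H.dtype)
    L[0, 0] = E
    L[0, 1:] = G_r
    L[1:, 0] = G_c
    L[1:, 1:] = H
    L += delta * np.eye(n + 1)
    # R = diag(1, S) + delta' * I
    R = np.eye(n + 1, dtype=H.dtype)
    R[1:, 1:] = S
    R += delta_p * np.eye(n + 1)
    eigvals, eigvecs = eig(L, R)
    # Take the root with the lowest real part (illustrative choice) and
    # normalize the leading component to 1; c[1:] is the parameter step.
    k = np.argmin(eigvals.real)
    c = eigvecs[:, k]
    return eigvals[k], c[1:] / c[0]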