Sample
vmc/sample/sampler
```python
class Sampler:

    def __init__(
        self,
        nqs: DDP,
        ele_info: ElectronInfo,
        eloc_param: Optional[dict],
        n_sample: int = 100,
        start_iter: int = 100,
        start_n_sample: Optional[int] = None,
        # therm_step: int = 2000,
        debug_exact: bool = False,
        seed: int = 100,
        dtype=torch.double,
        method_sample="AR",
        use_same_tree: bool = False,
        max_n_sample: Optional[int] = None,
        max_unique_sample: Optional[int] = None,
        only_AD: bool = False,
        only_sample: bool = False,
        use_sample_space: bool = False,
        min_batch: int = 10000,
        min_tree_height: Optional[int] = None,
        det_lut: Optional[DetLUT] = None,
        use_dfs_sample: bool = False,
        use_spin_raising: bool = False,
        spin_raising_coeff: float = 1.0,
        given_state: Optional[Tensor] = None,
        use_spin_flip: bool = False,
    ) -> None:
```
eloc-param
```python
from utils.enums import ElocMethod
from vmc.sample import ElocParams

eloc_param: ElocParams = {
    "method": ElocMethod.REDUCE,
    "use_unique": False,
    "use_LUT": False,
    "eps": 1e-2,
    "eps_sample": 100,
    # "alpha": 1.5,
    # "max_memory": 5,
    "batch": 1024,
    "fp_batch": 300000,
}
```
- `method`: one of `ElocMethod.SIMPLE`, `ElocMethod.REDUCE`, and `ElocMethod.SAMPLE_SPACE`.
- `use_unique`: remove duplicate \(n^{\prime}\), which gives a nice speedup in small systems.
- `use_LUT`: use a lookup table to reduce \(\psi(n^{\prime})\) evaluations. This must be `True` if `method = ElocMethod.SAMPLE_SPACE`.
- `eps`, `eps_sample`: \(\epsilon, N\), see: Method. Required if `method = ElocMethod.REDUCE`.
- `batch`, `fp_batch`: the batch size of the local-energy evaluation and of the forward pass, default: -1. Required if `method = ElocMethod.REDUCE` or `ElocMethod.SIMPLE`.
- `alpha`, `max_memory`: the maximum memory when `method = ElocMethod.SAMPLE_SPACE`.
Notes:

- Set `use_unique = False` in large systems (e.g. H50, STO-6G, aoa-basis).
- Set `use_LUT = False` in large systems or multi-node runs (e.g. world-size > 16).
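The deduplication behind `use_unique` can be sketched in plain Python. `eval_unique` below is a hypothetical illustration (not the library's implementation): it evaluates \(\psi\) once per unique configuration and scatters the results back, which is why it pays off in small systems where many connected configurations \(n^{\prime}\) coincide.

```python
def eval_unique(psi, configs):
    """Evaluate psi on each config, but only once per unique config.

    Sketch of the assumed `use_unique` semantics: build an index of
    unique configurations, evaluate psi on those, then gather results
    back into the original order.
    """
    seen = {}        # config -> position in the unique list
    unique = []      # unique configs, in first-seen order
    inverse = []     # for each input config, index into `unique`
    for c in configs:
        key = tuple(c)
        if key not in seen:
            seen[key] = len(unique)
            unique.append(c)
        inverse.append(seen[key])
    unique_vals = [psi(c) for c in unique]  # psi called len(unique) times
    return [unique_vals[i] for i in inverse]
```

In the real code the same effect would typically be obtained with `torch.unique(..., return_inverse=True)` on a batch of occupation-number strings; the dictionary version above only shows the idea.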
sample-param
```python
sampler_param = {
    "n_sample": int(2 * 1e5),
    "start_n_sample": int(2 * 1.0e5),
    "start_iter": 200,
    # "max_n_sample": int(1.0e8),
    # "max_unique_sample": int(6 * 1.0e4),
    "debug_exact": False,  # exact optimization
    "seed": 123,
    "method_sample": "AR",
    # "given_state": given_state,
    "only_AD": False,
    "min_batch": 80000,
    # "det_lut": det_lut,  # only used in CI-NQS exact optimization
    "use_same_tree": True,  # different rank-sample
    "min_tree_height": 12,  # different rank-sample
    "use_dfs_sample": True,
    "eloc_param": eloc_param,
    "use_spin_flip": False,
}
```
- `n_sample`: the number of samples.
- `start_n_sample`, `start_iter`: the number of samples used during the first `start_iter` iterations.
- `max_n_sample`, `max_unique_sample`: upper bounds on the total and unique sample counts, used to restrict the sampling.
- `debug_exact`: exact optimization; the number of unique samples equals the FCI-space dimension.
- `seed`: the random seed of the sampling.
- `method_sample`: the sampling method. Currently only "AR" (autoregressive) is supported when the world size is greater than 1.
- `only_AD`: no sampling; random samples are selected to check the backward memory usage.
- `min_batch`: the batch size of the sampling.
- `use_same_tree`, `min_tree_height`: control per-rank sampling. These must be selected carefully if the world size is greater than 1.
- `use_dfs_sample`: use DFS (depth-first search) instead of BFS (breadth-first search) sampling.
- `eloc_param`: see eloc-param.
- `use_spin_flip`: see: Spin-flip, `from utils.public_function import SpinProjection; SpinProjection.init(N=nele, S=0)`.
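How `start_n_sample`, `start_iter`, and `max_n_sample` are assumed to interact can be sketched with a small helper. `samples_for_iteration` is hypothetical (the real scheduling lives inside `Sampler`); it only illustrates the warm-up behavior described above.

```python
def samples_for_iteration(it, n_sample, start_n_sample, start_iter,
                          max_n_sample=None):
    """Hypothetical sketch of the per-iteration sample budget.

    Assumed semantics: use start_n_sample for the first start_iter
    iterations, n_sample afterwards, and cap the result at
    max_n_sample when it is set.
    """
    n = start_n_sample if it < start_iter else n_sample
    if max_n_sample is not None:
        n = min(n, max_n_sample)
    return n
```

With the `sampler_param` above (`start_iter = 200` and equal budgets), the warm-up phase is a no-op; it matters when `start_n_sample` is set smaller than `n_sample` to cheapen early iterations.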
Notes:

- `min_batch`, `use_same_tree`, `min_tree_height`, `use_dfs_sample`: these are implemented in the ansatz (e.g. MPS-RNN, Transformer).
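The DFS/BFS choice can be illustrated on plain bitstrings. `enumerate_configs` below is a toy sketch, not the sampler itself (the real autoregressive sampler walks the tree with conditional amplitudes and per-rank partitioning); it only shows how the two traversal orders visit the same configurations with very different frontier sizes.

```python
def enumerate_configs(n_bits, dfs=True):
    """Enumerate all length-n_bits occupation strings via a sampling tree.

    Toy sketch: each tree node is a bitstring prefix; leaves are full
    configurations. DFS keeps a frontier of O(n_bits) nodes, while BFS
    holds an entire level (up to 2**n_bits nodes) at once.
    """
    out, frontier = [], [()]
    while frontier:
        node = frontier.pop() if dfs else frontier.pop(0)
        if len(node) == n_bits:
            out.append(node)
            continue
        children = [node + (0,), node + (1,)]
        # For DFS, push the 0-branch last so it is expanded first.
        frontier.extend(reversed(children) if dfs else children)
    return out
```

Both orders yield the same set of configurations; DFS trades level-by-level batching for a much smaller frontier, which is why `use_dfs_sample = True` is attractive for deep trees.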