10th July 2019, 18:02
> As I understand these scores for (5) are the results of, de facto, lstm-compress v3a version.
More or less, yes: it wasn't exactly v3a; I changed the parameters directly in the source, the same parameters we now set through the CLI.
> I'm trying to set your options in v3a and there is a slightly different set of options than you describe (maybe it's mostly a matter of names).
In (5) I used the NNCP-style names so the NNCP and lstm-compress options are easy to compare; in v3a I used the lstm-compress-style names. It is mostly just a question of slightly different names.
> How can I get your numbers with the lstm-compress v3a version?
Approximately, with
-c352,3,10,0.2,1,0.0250,0.9999,0.0033,0.000050,0.0250,0.9999,0.00000100,0.050,2.0,0
or
-c352,3,10,0.2,1,0.0250,0.9999,0.0034,0.000050,0.0250,0.9999,0.00000100,0.050,2.0,0
(Remove these spaces ........................................... ^ added by the forum software.)
They are the default options with cells=352 and adam_alpha_lr=0.0033 or 0.0034.
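To make the `-c` string easier to read back, here is a small sketch that splits it into named fields. The field order below is only my guess, made by matching the values in the command above against the option names discussed in this thread (cells=352, adam_alpha_lr=0.0033, etc.); the meaning of the final `0` is not stated in the post, so it is labeled as unknown.

```python
# Hypothetical decoder for an lstm-compress "-c" option string.
# Field order is inferred from matching values to option names in the
# post; it is NOT taken from the lstm-compress source. The last field's
# meaning is unknown here.
FIELDS = ["cells", "layers", "horizon", "init_range", "seed",
          "adam_beta1", "adam_beta2", "adam_alpha_lr", "adam_alpha_t",
          "adam_beta1_t", "adam_beta2_t", "adam_eps",
          "learn_rate", "grad_clip", "unknown_flag"]

def parse_c_option(arg: str) -> dict:
    """Split '-c<v1>,<v2>,...' into a name -> float mapping."""
    values = arg.removeprefix("-c").split(",")
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    return dict(zip(FIELDS, (float(v) for v in values)))

opts = parse_c_option("-c352,3,10,0.2,1,0.0250,0.9999,0.0033,"
                      "0.000050,0.0250,0.9999,0.00000100,0.050,2.0,0")
```

With this guessed layout, `opts["cells"]` is 352 and `opts["adam_alpha_lr"]` is 0.0033, which matches the description of the command as "the default options with cells=352 and adam_alpha_lr=0.0033".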
(5) option                  v3a option
gradient_clipping=-/+2.0    grad_clip=2.0
n_layer=3                   layers=3
hidden_size=352             cells=352
batch_size=10               horizon=10
time_steps=10               used in NNCP; in lstm-compress it is always = batch_size
n_symb=256                  vocab_full=1 (vocabulary size fixed to 256)
ln=0                        used in NNCP; in lstm-compress it is always 0
fc=0                        used in NNCP; in lstm-compress it is always 0
sgd_opt=adam                used in NNCP; in lstm-compress v3a it is always adam
lr=5.000e-002               learn_rate=0.050 (= 5.000e-002)
adam_lr=3.350e-003          adam_alpha_lr=0.0033 or 0.0034 (3.350e-003 is 0.00335, but
                            adam_alpha_lr has a resolution of 4 decimals, not 5, so
                            exactly 0.00335 cannot be set)
adam_beta1=0.025000         adam_beta1=0.0250
adam_beta2=0.999900         adam_beta2=0.9999
adam_eps=1.000e-006         adam_eps=0.00000100 (= 1.000e-006)
mem=62.412K                 memory usage as shown in Task Manager
The options below are not listed in (5) because they do not exist in NNCP; they are unchanged between (5) and v3a.
init_range=0.2
seed=1
adam_alpha_t=0.000050
adam_beta1_t=adam_beta1
adam_beta2_t=adam_beta2
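For anyone unsure what the adam_* parameters above control, here is a minimal sketch of one textbook Adam update step (Kingma & Ba), plugged with the values from this post. This is generic Adam, not lstm-compress's actual implementation; note that beta1=0.0250 is far below the usual default of 0.9, i.e. the first-moment average here is nearly the raw gradient.

```python
import math

def adam_step(param, grad, m, v, t,
              alpha=0.0033, beta1=0.0250, beta2=0.9999, eps=1.0e-6):
    """One textbook Adam update using the hyperparameters from the post.

    m, v : running first/second moment estimates (start at 0.0)
    t    : 1-based step counter, used for bias correction
    """
    m = beta1 * m + (1.0 - beta1) * grad          # first moment (mean)
    v = beta2 * v + (1.0 - beta2) * grad * grad   # second moment (uncentered var)
    m_hat = m / (1.0 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1.0 - beta2 ** t)
    param -= alpha * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# One step from zero state with gradient 1.0: the bias-corrected moments
# are both 1.0, so the update is almost exactly -alpha.
p, m, v = adam_step(0.0, 1.0, 0.0, 0.0, 1)
```

Here alpha plays the role of adam_alpha_lr, and adam_eps keeps the division stable when the second-moment estimate is near zero.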