author     David Doan <daviddoan@davids-mbp-3.devices.brown.edu>  2022-05-09 00:25:39 -0400
committer  David Doan <daviddoan@davids-mbp-3.devices.brown.edu>  2022-05-09 00:25:39 -0400
commit     a0870ac3f1f84278c5b9fe7f78f6b1af1d1f33e9 (patch)
tree       440f9c17f23042e32c7c495c49d36bb838fcd73a /hyperparameters.py
parent     18f1f7bddcb63502120581f3fa24b980559ffa9f (diff)
clean and refactor code for submission
Diffstat (limited to 'hyperparameters.py')
-rw-r--r--  hyperparameters.py  12
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/hyperparameters.py b/hyperparameters.py
index ac2beda8..180eaf85 100644
--- a/hyperparameters.py
+++ b/hyperparameters.py
@@ -9,19 +9,21 @@
 Number of epochs. If you experiment with more complex networks you
 might need to increase this. Likewise if you add regularization that
 slows training.
 """
-num_epochs = 200
+num_epochs = 500

 """
 A critical parameter that can dramatically affect whether training
 succeeds or fails. The value for this depends significantly on which
 optimizer is used. Refer to the default learning rate parameter
 """
-learning_rate = 1e2
+learning_rate = .002
+
+beta_1 = .99
+
+epsilon = 1e-1

 momentum = 0.01

 alpha = .05
-beta = 5
-# alpha = 1e-5
-# beta = 1e-2
+beta = 5
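
Note: the parameter names added here (beta_1, epsilon) together with the drop in learning_rate from 1e2 to .002 are consistent with an Adam-style optimizer. The training script itself is not part of this diff, so the following is only a minimal sketch of how these constants might be consumed, assuming TensorFlow/Keras and that the file is imported as the module hyperparameters:

    # Hypothetical usage -- the actual training code is not shown in this
    # commit. Assumes TensorFlow/Keras and an importable hyperparameters.py.
    import tensorflow as tf
    import hyperparameters as hp

    # Adam accepts exactly the three values touched by this commit.
    optimizer = tf.keras.optimizers.Adam(
        learning_rate=hp.learning_rate,  # .002
        beta_1=hp.beta_1,                # .99
        epsilon=hp.epsilon,              # 1e-1
    )

An epsilon of 1e-1 is far above Keras's default of 1e-7, but the TensorFlow documentation notes that values around 0.1 or 1.0 work better for some models, so the value here is plausible rather than a typo.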