Optimizer
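The examples below demonstrate PaddlePaddle's Optimizer.get_lr(): when the optimizer is constructed with a plain float learning rate it returns that constant, and when it is constructed with an LRScheduler such as StepDecay it returns the value currently produced by the schedule. Both the default dynamic graph mode and static graph mode are covered.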


>>> # train in the default dynamic graph mode
>>> import paddle
>>> import numpy as np
>>> emb = paddle.nn.Embedding(10, 3)

>>> ## example 1: no LRScheduler is used, so get_lr() returns the same fixed value at every step
>>> adam = paddle.optimizer.Adam(0.01, parameters=emb.parameters())
>>> for batch in range(10):
...     input = paddle.randint(low=0, high=5, shape=[5])
...     out = emb(input)
...     out.backward()
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.01
...     adam.step()
...     adam.clear_grad()  # reset gradients so they do not accumulate across batches
Learning rate of step0: 0.01
Learning rate of step1: 0.01
Learning rate of step2: 0.01
Learning rate of step3: 0.01
Learning rate of step4: 0.01
Learning rate of step5: 0.01
Learning rate of step6: 0.01
Learning rate of step7: 0.01
Learning rate of step8: 0.01
Learning rate of step9: 0.01
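With a plain float learning rate there is no scheduler state to advance, so get_lr() returns the same constant 0.01 at every step.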

>>> ## example 2: a StepDecay scheduler is used, so get_lr() returns the currently scheduled learning rate
>>> scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
>>> adam = paddle.optimizer.Adam(scheduler, parameters=emb.parameters())
>>> for batch in range(10):
...     input = paddle.randint(low=0, high=5, shape=[5])
...     out = emb(input)
...     out.backward()
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.5->0.05...
...     adam.step()
...     adam.clear_grad()   # reset gradients before the next batch
...     scheduler.step()    # advance the StepDecay schedule by one step
Learning rate of step0: 0.5
Learning rate of step1: 0.5
Learning rate of step2: 0.05
Learning rate of step3: 0.05
Learning rate of step4: 0.005000000000000001
Learning rate of step5: 0.005000000000000001
Learning rate of step6: 0.0005000000000000001
Learning rate of step7: 0.0005000000000000001
Learning rate of step8: 5.000000000000001e-05
Learning rate of step9: 5.000000000000001e-05
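The sequence above follows StepDecay's rule: starting from learning_rate, the value is multiplied by gamma once every step_size calls to scheduler.step(). As a quick sanity check (a minimal sketch, not part of the original example), the printed values can be reproduced by hand:

>>> # sketch: recompute the StepDecay schedule without the optimizer
>>> base_lr, gamma, step_size = 0.5, 0.1, 2
>>> for batch in range(10):
...     lr = base_lr * gamma ** (batch // step_size)
...     print("Learning rate of step{}: {}".format(batch, lr))  # same sequence as printed above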

>>> # train in static graph mode
>>> paddle.enable_static()
>>> main_prog = paddle.static.Program()
>>> start_prog = paddle.static.Program()
>>> with paddle.static.program_guard(main_prog, start_prog):
...     x = paddle.static.data(name='x', shape=[None, 10])
...     z = paddle.static.nn.fc(x, 100)
...     loss = paddle.mean(z)
...     scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
...     adam = paddle.optimizer.Adam(learning_rate=scheduler)
...     adam.minimize(loss)
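Here adam.minimize(loss) appends the backward pass and the Adam update ops to main_prog, so each exe.run(main_prog, ...) in the loop below performs one full training step.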

>>> exe = paddle.static.Executor()
>>> exe.run(start_prog)
>>> for batch in range(10):
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.5->0.05->0.005...
...     out = exe.run(main_prog, feed={'x': np.random.randn(3, 10).astype('float32')})
...     scheduler.step()
Learning rate of step0: 0.5
Learning rate of step1: 0.5
Learning rate of step2: 0.05
Learning rate of step3: 0.05
Learning rate of step4: 0.005000000000000001
Learning rate of step5: 0.005000000000000001
Learning rate of step6: 0.0005000000000000001
Learning rate of step7: 0.0005000000000000001
Learning rate of step8: 5.000000000000001e-05
Learning rate of step9: 5.000000000000001e-05
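As in the dynamic graph example, the schedule only advances when scheduler.step() is called explicitly; adam.get_lr() reports the rate that the next exe.run(...) will use.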
