ViewTube

130 results

Related queries

adam optimizer

cross entropy loss

stock price prediction using machine learning

optimization algorithms

adagrad

model based reinforcement learning

batch normalization

stochastic gradient descent

backpropagation

ftrl optimizer

learning rate

regularization

Kwangjun Ahn
[ICML 2024] Understanding Adam Optimizer via Online Learning of Updates: Adam is FTRL in Disguise
Presentation video of the paper "Understanding Adam Optimizer via Online Learning of Updates: Adam is FTRL in Disguise" by ...
14:16 · 918 views · 1 year ago

DataHack
DataTalks #7 - FTRL Formulations For Online Learning by Kristian Holsheimer
Speaker: Kristian Holsheimer, Booking.com. Title: FTRL Formulations For Online Learning. This talk was given at the seventh ...
48:55 · 2,179 views · 6 years ago

Lecctron
I Downloaded “Free FPS” On My Laptop…
Download Hone for free here: https://hone.gg/ PC upgrades are expensive, so today I'm showing you guys a free app that can ...
6:21 · 349,721 views · 1 year ago

Justin Chang
Follow The Regularized Leader: Minimizing Regret With Math (#SoME3)
Follow The Regularized Leader (FTRL) is a regret minimization algorithm for adversarial settings. It is used by many companies.
37:39 · 1,583 views · 2 years ago
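
The FTRL description above is brief, so here is a minimal sketch of the closed-form FTRL play with linearized losses and an L2 regularizer; the function name, the regularization strength, and the toy loss sequence are illustrative assumptions, not taken from the video:

    import numpy as np

    def ftrl_l2(gradients, lam=1.0):
        """Follow The Regularized Leader with an L2 regularizer.

        At round t the learner plays the minimizer of the sum of all past
        linearized losses g_s . w plus (lam/2)*||w||^2, which in closed
        form is -(1/lam) * (sum of past gradients).
        """
        g_sum = np.zeros_like(gradients[0])
        plays = []
        for g in gradients:
            # play before seeing the current loss, using past gradients only
            w_t = -g_sum / lam
            plays.append(w_t)
            g_sum += g
        return plays

    # toy adversarial sequence of linear losses in 2D
    rng = np.random.default_rng(0)
    gs = [rng.normal(size=2) for _ in range(5)]
    for t, w in enumerate(ftrl_l2(gs)):
        print(t, w)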

Molly Rocket
Refterm Lecture Part 1 - Philosophies of Optimization
https://www.kickstarter.com/projects/annarettberg/meow-the-infinite-book-two Live Channel: https://www.twitch.tv/molly_rocket Part ...
18:41 · 80,262 views · 4 years ago

DeepBean
Optimization for Deep Learning (Momentum, RMSprop, AdaGrad, Adam)
Here we cover six optimization schemes for deep neural networks: stochastic gradient descent (SGD), SGD with momentum, SGD ...
15:52 · 124,586 views · 2 years ago
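
For reference alongside the schemes listed above, a minimal heavy-ball (SGD with momentum) step; the learning rate, momentum value, and toy quadratic objective are assumed for illustration and are not from the video:

    import numpy as np

    def sgd_momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
        """One SGD-with-momentum (heavy-ball) update: the velocity keeps an
        exponentially decaying accumulation of past gradients, and the
        parameters move along that accumulated direction."""
        velocity = beta * velocity + grad
        w = w - lr * velocity
        return w, velocity

    # minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w
    w = np.array([5.0, -3.0])
    v = np.zeros_like(w)
    for _ in range(100):
        w, v = sgd_momentum_step(w, grad=w, velocity=v)
    print(w)  # close to the minimizer [0, 0]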

OpenMDAO
Multiobjective optimization
Multiobjective optimization is somewhat of a misnomer -- you actually have to have predefined weightings for each of the ...
5:49 · 22,180 views · 3 years ago
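
The description's point that multiobjective optimization in practice needs predefined weights can be seen with a weighted-sum scalarization; the sketch below uses SciPy and a made-up pair of objectives, not the OpenMDAO setup from the video:

    import numpy as np
    from scipy.optimize import minimize

    # two competing objectives: be close to 0 and be close to 1
    def f1(x):
        return (x[0] - 0.0) ** 2

    def f2(x):
        return (x[0] - 1.0) ** 2

    # scalarize with predefined weights; changing the weights changes the answer
    def weighted(x, w1=0.7, w2=0.3):
        return w1 * f1(x) + w2 * f2(x)

    res = minimize(weighted, x0=np.array([0.5]))
    print(res.x)  # about [0.3] for weights (0.7, 0.3)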

CodeEmporium
Optimizers - EXPLAINED!
From Gradient Descent to Adam. Here are some optimizers you should know. And an easy way to remember them. SUBSCRIBE ...
7:23 · 143,735 views · 5 years ago
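
Since the entry above runs from gradient descent through Adam, a compact version of the standard Adam update (Kingma & Ba) may be a useful reference; the hyperparameters are the commonly quoted defaults and the quadratic test objective is an assumption for illustration, not material from the video:

    import numpy as np

    def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update: running averages of the gradient (m) and its
        square (v), both bias-corrected, then a per-coordinate scaled step."""
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
        return w, m, v

    # minimize f(w) = 0.5 * ||w||^2; its gradient is w
    w = np.array([2.0, -1.0])
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, 501):
        w, m, v = adam_step(w, grad=w, m=m, v=v, t=t, lr=0.05)
    print(w)  # close to the minimizer [0, 0]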

SCION Instruments
How to use the Method Optimisation Mode in the HT3 Software
Learn how to use the Method Optimisation Mode (M.O.M) within the HT3 Software to streamline your workflow and improve your ...
3:26 · 0 views · 8 days ago

Mısra Turp
Which Loss Function, Optimizer and LR to Choose for Neural Networks
Neural Networks have a lot of knobs and buttons you have to set correctly to get the best possible performance out of them. Although ...
4:59 · 9,875 views · 3 years ago

MATLAB
What Is Linear Quadratic Regulator (LQR) Optimal Control? | State Space, Part 4
Check out the other videos in the series: https://youtube.com/playlist?list=PLn8PRpmsu08podBgFw66-IavqU2SqPg_w Part 1 ...
17:24 · 359,623 views · 6 years ago

Optimizers · 5.38K subscribers

Keifer
Increasing code performance with LTO
Explores potential performance improvements using link time optimization (LTO) by benchmarking various ...
5:44 · 11,181 views · 3 weeks ago

Gopher Academy
GopherCon 2025: Profiling Request Latency with Critical Path Analysis - Felix Geisendörfer
Go ships with great tools for diagnosing performance bottlenecks, with pprof's CPU profiler being perhaps the most well-known ...
25:15 · 1,272 views · 12 days ago

Folker Hoffmann
Understanding scipy.minimize part 1: The BFGS algorithm
A description of how quasi-Newton algorithms in general, and the BFGS algorithm in particular, work. Animations are made with the ...
12:58 · 26,139 views · 2 years ago
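
For anyone who wants to try the function the video analyzes, a minimal scipy.optimize.minimize call with method="BFGS"; the Rosenbrock test objective is a standard choice assumed here, not necessarily the one used in the video:

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    # BFGS builds an approximation to the inverse Hessian from gradient
    # differences, so supplying the analytic gradient (jac) avoids
    # finite-difference estimates.
    result = minimize(rosen, x0=np.array([-1.2, 1.0]), method="BFGS", jac=rosen_der)
    print(result.x)    # close to the minimizer [1, 1]
    print(result.nit)  # number of BFGS iterations taken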

Parlons IA avec Louis-François Bouchard
Fine-Tuning by Reinforcement: Discover RFT by OpenAI
Check out my new 10-hour AI video course: https://www.louisbouchard.ca/formation-ia-de-10-heures ►Our RAG course: https ...
12:34 · 1,064 views · 8 months ago

Frenetic
Frenetic Core Optimizer™
The Core Optimizer is a new solution offered by Frenetic Online for enhancing the design of magnetic components.
4:44 · 603 views · 3 years ago

ByteQuest
RMSProp Optimizer Visually Explained | Deep Learning #12
In this video, you'll learn how RMSProp makes gradient descent faster and more stable by adjusting the step size for every ...
5:42 · 192 views · 3 weeks ago
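
A minimal sketch of the per-coordinate step-size adjustment the description refers to; the decay rate, learning rate, and badly scaled toy gradient are assumptions for illustration, not taken from the video:

    import numpy as np

    def rmsprop_step(w, grad, sq_avg, lr=0.01, rho=0.9, eps=1e-8):
        """One RMSProp update: keep a running average of squared gradients
        and divide the step by its square root, so coordinates with large
        recent gradients take smaller steps."""
        sq_avg = rho * sq_avg + (1 - rho) * grad ** 2
        w = w - lr * grad / (np.sqrt(sq_avg) + eps)
        return w, sq_avg

    # badly scaled quadratic: the gradient is (100 * x, y)
    w = np.array([1.0, 1.0])
    s = np.zeros_like(w)
    for _ in range(300):
        grad = np.array([100.0 * w[0], 1.0 * w[1]])
        w, s = rmsprop_step(w, grad, s)
    print(w)  # both coordinates shrink at a similar rate despite the scaling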

Trelis Research
Fine tuning Optimizations - DoRA, NEFT, LoRA+, Unsloth
ADVANCED-fine-tuning Repo: https://trelis.com/advanced-fine-tuning-scripts/ ➡️ ADVANCED-inference Repo: ...
33:26 · 10,047 views · 1 year ago

Optimizer · 2.38K subscribers