
Physics-Informed Neural Networks with MATLAB - Conor Daly | Deep Dive Session 5 

Jousef Murad | Deep Dive
36K subscribers
2.9K views

Published: 27 Oct 2024

Comments: 5
@JousefM · 1 month ago
🌎 GitHub Repository: github.com/matlab-deep-learning/physics-informed-neural-networks-with-matlab-live-coding-session
@adershm3510 · 3 days ago
Can this method scale up if there is more than one input feature (considering the summation in the gradient calculation)? Also, how would the training loop change if I wanted to train in batches?
@jamggurogi42 · 2 months ago
Thanks for this session. I've made several attempts at understanding PINNs over the last few years, and this is the closest I've come to feeling like I'm nearly there. I have a few remaining questions; if the speaker has time to answer them, I would be very grateful!

First and foremost, I'm still a little confused about what you are actually training on in the PINN loss term. I expected that, for some input(s) t, you would compute the next time step or predict the target variable x once with the model, and once using the differential equation you want to enforce in your model, allowing you to use the difference as residuals. However, what I see in the loss function seems to be the model outputs, the first gradient (with respect to which function? The ODE was not defined or passed anywhere yet, right?), and the second gradient, all multiplied by some hyperparameters, which I would not intuitively flag as residuals. It is not immediately clear to me what this is actually doing, but it certainly seems to work. I'm sure I'm missing some key point, but I'm not sure what it is 😅

On a less important note, what is the purpose of connecting 128 nodes to a single input? Wouldn't the first hidden layer contain essentially 128 (fully dependent) differently scaled versions of the exact same scalar input? Or was this for demonstration purposes only?

Finally, I've also heard of NNs being used to predict the "hyperparameters" of differential equations (e.g., constants/properties that are usually unknown and differ per scenario), with the equations performing the actual computation between input and output. Would these still be considered PINNs that could be implemented in a similar framework, or is this a related but different idea requiring different techniques to implement? Thanks!
@ConorDaly-q2y · 2 months ago
Thanks for these great questions! The model we trained in this demo takes as input the time t and predicts the mass-spring-damper system displacement x(t) -- i.e., the model predicts the displacement at the *current* time. So the PINN loss term simply ensures that the predicted displacement x(t) satisfies the governing second-order differential equation (see the sketch after this thread). It would require a slightly different formulation to create a model which forecasts the displacement -- i.e., a model which takes as input the time t and predicts x(t+1).

The choice of 128 hidden units in the hidden layer is essentially a hyperparameter choice. You are absolutely right that, essentially, the activations of that first hidden layer will consist of 128 differently scaled (and biased) representations of the input: h_i = w_i*t + b_i. I suppose it's part of the beauty of neural networks that stochastic gradient descent manages to find w_i, b_i so that the activations are useful for the task at hand.

RE predicting hyperparameters: yes, I'd say that's still PINNs! I would call a formulation like the one you've described an inverse problem. Here's an example, where we solve for the thermal diffusivity of the heat equation given some known heat distribution: github.com/matlab-deep-learning/Inverse-Problems-using-Physics-Informed-Neural-Networks-PINNs. The techniques are essentially the same as what's covered in the demo here.
@jamggurogi42 · 1 month ago
@ConorDaly-q2y thanks a lot for the clarifications, and for the reference to other interesting work! It's very useful :)
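
Editor's note: to make the loss term discussed in this thread concrete, here is a minimal MATLAB sketch of a PINN loss for the mass-spring-damper equation m*x'' + c*x' + k*x = 0, written against the Deep Learning Toolbox (dlarray/dlgradient). The function and variable names (modelLoss, net, t, m, c, k) are illustrative assumptions, not code from the session, and the initial-condition loss terms a full demo would include are omitted for brevity.

function [loss, gradients] = modelLoss(net, t, m, c, k)
% Hypothetical PINN loss sketch: t is a formatted dlarray of collocation
% times, net is a dlnetwork mapping t to the displacement x(t), and
% m, c, k are the (known) mass, damping, and stiffness constants.

    % Predict the displacement at the collocation times.
    x = forward(net, t);

    % First derivative dx/dt. EnableHigherDerivatives lets us
    % differentiate the result again below.
    xt = dlgradient(sum(x, "all"), t, EnableHigherDerivatives=true);

    % Second derivative d2x/dt2.
    xtt = dlgradient(sum(xt, "all"), t, EnableHigherDerivatives=true);

    % ODE residual: how far the prediction is from satisfying
    % m*x'' + c*x' + k*x = 0. The PINN loss is its mean square.
    residual = m.*xtt + c.*xt + k.*x;
    loss = mean(residual.^2, "all");

    % Gradients of the loss with respect to the learnable parameters,
    % for use with an optimizer such as adamupdate.
    gradients = dlgradient(loss, net.Learnables);
end

In a training loop this would be evaluated as [loss, gradients] = dlfeval(@modelLoss, net, t, m, c, k) so that dlgradient can trace the computation. This also answers the "with respect to which function?" confusion above: no ODE is passed in anywhere, because the equation lives entirely inside the residual expression, and the derivatives come from automatic differentiation of the network output with respect to its input t.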