
Python Code of Simulated Annealing Optimization Algorithm 

Solving Optimization Problems
14K subscribers · 30K views

In this video, I’m going to show you the general principle, a flowchart, and the Python code of the Simulated Annealing optimization algorithm. In addition, I will test its performance in solving both minimization and maximization problems on well-known benchmarks. You can download the Python code, and it is very easy to customize it to solve your own optimization problems in various fields.
Did you know that Simulated Annealing is one of the top three most popular stochastic optimization algorithms for solving complex, large-scale optimization problems in various fields? Only the Genetic Algorithm and Particle Swarm Optimization are more popular.
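The general loop can be sketched in a few lines of Python. This is a minimal illustration, not the downloadable code from the video: the benchmark (the sphere function), the parameter values, and all names below are assumptions chosen for the example.

```python
import math
import random

def simulated_annealing(objective, bounds, T0=100.0, cooling=0.95, iters=2000):
    """Minimize `objective` over the box `bounds` with a basic SA loop."""
    random.seed(0)  # fixed seed so the run is repeatable
    current = [random.uniform(lo, hi) for lo, hi in bounds]
    current_f = objective(current)
    best, best_f = current[:], current_f  # copy by value, not by reference
    T = T0
    for _ in range(iters):
        # propose a neighbour by a small random step, clamped to the bounds
        candidate = [min(hi, max(lo, x + random.gauss(0.0, 0.1 * (hi - lo))))
                     for x, (lo, hi) in zip(current, bounds)]
        cand_f = objective(candidate)
        delta = cand_f - current_f
        # always accept improvements; accept worse moves with prob. exp(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current, current_f = candidate, cand_f
            if current_f < best_f:
                best, best_f = current[:], current_f
        T *= cooling  # geometric cooling schedule
    return best, best_f

# sphere function: a standard minimization benchmark with optimum 0 at the origin
sphere = lambda x: sum(v * v for v in x)
sol, val = simulated_annealing(sphere, [(-5.0, 5.0)] * 2)
```

For a maximization benchmark, the same loop works after negating the objective.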
To receive more videos on the topic of "Solving Optimization Problems", please SUBSCRIBE here: / @solvingoptimizationpr...
Python code: bit.ly/352MhBJ
HERE ARE 6 LISTS OF MY VIDEOS YOU MAY BE INTERESTED IN:
1. Optimization Using Genetic Algorithm:
• Optimization Using Gen...
2. Optimization Using Particle Swarm Optimization:
• Optimization Using Par...
3. Optimization Using Simulated Annealing Algorithm:
• Optimization Using Sim...
4. Optimization Using Optimization Solvers:
• Optimization Using Opt...
5. Optimization Using Matlab:
• Optimization Using Matlab
6. Optimization Using Python:
• Optimization Using Python
If you have any questions, please let me know by leaving a comment below.
About Me: learnwithpanda...
My Blog: learnwithpanda.com
My Facebook: bit.ly/36234ot
My LinkedIn: bit.ly/3bbth5e
Free music from the YouTube Audio Library.
Thank you for watching - I really appreciate it :)
All of my videos on the topic of Solving Optimization Problems: #SolvingOptimizationProblems, #MySimulatedAnnealingAlgorithm, #MyPythonCode
© Copyright by Solving Optimization Problems. ☞ Do not re-upload.

Published: 24 Oct 2024

Comments: 53
@sothearathmeng6182 · 7 months ago
Excuse me, I am learning this algorithm from scratch. I want to know whether it is suitable to combine this algorithm with another optimization method to reach convergence when we control many variables?
@SolvingOptimizationProblems · 7 months ago
Yes, it is possible to integrate other optimization algorithms; in those cases, we call them hybrid algorithms.
@GunPoint932 · 3 years ago
How do we solve placement in integrated circuits using cyclic reinforcement learning and simulated annealing?
@SolvingOptimizationProblems · 3 years ago
You can update the objective function and constraints to solve your problems. The rest of the code can be kept the same.
@GunPoint932 · 3 years ago
@SolvingOptimizationProblems Which is the correct function that can solve this problem? Our agent is an Actor network that predicts the index of candidate block bc, and the value of the action taken by the agent is calculated by the Critic network. The value function V is further used to compute the advantage of the action taken, using Generalised Advantage Estimation [20]. Together, the Actor and Critic networks constitute the policy network, which learns a sequence of actions to provide a good initialization for SA. We use PPO [21] to train the policy network. The RL agent runs r steps, where r is a hyperparameter. The sequence pair generated after r steps is taken as a starting point for SA. The SA runs for another s steps, where s is also a hyperparameter. After r + s steps in every epoch, a global reward is obtained. We define the global reward rg as the difference between the cost of the solution obtained after running the SA for s steps and the cost of the starting point Pseq generated after running the RL for r steps:

rg = C(Pseq(r+s)) − C(Pseq(r))   (1)

The global reward conveys the efficacy of the sequence-pair initialization to the RL as feedback and serves as the approximated value of the r-th (final) step, Vr, as stated in Eqn. 2 and Eqn. 3, in an otherwise infinite action space [10]. Hence, the agent is encouraged to increase the global reward by learning to generate a better initialization Pseq(r) for SA after r steps of RL operations. The state is reset after every epoch. Since the weights of our Actor-Critic network are updated cyclically, we describe the framework as cyclic.

Vt = Σ_{i=t}^{r−1} γ^i r_l^i + Σ_{i=r}^{∞} γ^i r_l^i = Σ_{i=t}^{r−1} γ^i r_l^i + Vr   (2)

rg = Vt − Σ_{i=t}^{r−1} γ^i r_l^i   (3)
@SolvingOptimizationProblems · 3 years ago
Wow, that is too complex for me to understand your problem.
@amirrezasadeghi1954 · 4 years ago
The standard formula is P = exp(−(E2−E1)/T), but according to your code P = exp(−E/(EA*T)), and EA = E, so P = exp(−1/T). Please explain this, I don't understand it.
@amirrezasadeghi1954 · 4 years ago
(Here E2 − E1 = E.)
@SolvingOptimizationProblems · 4 years ago
Hi, there is no right or wrong way to calculate P. We can adjust P to make it work better for particular problems. EA = E only at iteration 1; after that EA changes, see line 61 of the code.
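For reference, here is a small sketch of the rule being discussed, reconstructed from this thread (the names E, EA, n and the update formula follow the comments; this is an assumed reconstruction, not the actual downloadable code). It shows that EA equals E only at iteration 1 and then becomes a running average of the E values:

```python
import math

def acceptance_probability(E, EA, T):
    # normalized Metropolis-style rule from the thread: P = exp(-E/(EA*T))
    return math.exp(-E / (EA * T))

# running-average update from the thread: EA = (EA*(n-1) + E)/n
EA = 0.0
energies = [4.0, 2.0, 6.0]        # illustrative energy differences, one per iteration
for n, E in enumerate(energies, start=1):
    EA = (EA * (n - 1) + E) / n   # at n == 1 this reduces to EA = E
# EA is now the mean of [4.0, 2.0, 6.0], i.e. 4.0, while the latest E is 6.0
```

So at iteration 1 the probability is indeed exp(−1/T), but from iteration 2 on, EA and E generally differ.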
@BizouElf · 3 years ago
I tried changing the number of variables and an error appears at line 26. Can you help, please?
@ashweenasundar · 3 years ago
Hey, have you checked the spacing of each line? Python is quite sensitive in terms of indentation.
@BizouElf · 3 years ago
@ashweenasundar That's the thing, I haven't touched that line, and it keeps showing 'list index out of range'. Can you help, please?
@SolvingOptimizationProblems · 3 years ago
Try to understand the code first, and then you will know how to change it.
@BizouElf · 3 years ago
@SolvingOptimizationProblems Asking for help sort of tells you that I don't understand, don't you think?
@Norhther · 3 years ago
@BizouElf You are not showing any effort in trying to understand the code, because you're not saying what you don't understand or what you have tried; you are simply saying "hey, I don't know what to do". We are not here to do your work. Pay some respect, kid, and put in some effort first.
@ashweenasundar · 3 years ago
How do we handle an objective function that involves a summation symbol? And if there is more than one objective function, how do we solve them?
@SolvingOptimizationProblems · 3 years ago
The easiest way to deal with multiple objective functions is to add them together with weight coefficients.
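As a sketch of that weighted-sum idea (the two objectives and the weights below are made-up examples, not from the video):

```python
def f1(x):
    # first objective (hypothetical): distance from 1
    return (x - 1) ** 2

def f2(x):
    # second objective (hypothetical): distance from 3
    return (x - 3) ** 2

def combined(x, w1=0.5, w2=0.5):
    # single scalar objective: weighted sum of the individual objectives
    return w1 * f1(x) + w2 * f2(x)

# with equal weights, a simple grid search finds the compromise point x = 2,
# halfway between the two individual optima
best_x = min((x / 100 for x in range(500)), key=combined)
```

The combined function can then be handed to any single-objective optimizer, SA included; changing the weights shifts the trade-off between the objectives.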
@ashweenasundar · 3 years ago
@SolvingOptimizationProblems Okay, thank you.
@dhwanishah3272 · 3 years ago
How do we adapt this algorithm to multiple variables, say three? And how can changes in the system be analyzed?
@SolvingOptimizationProblems · 3 years ago
Hi, to solve a problem with 3 variables, we need to update code lines 10 and 18. The rest can be kept the same.
@ashweenasundar · 3 years ago
I want to ask: if during the iterations the best solution we obtain sits exactly at the upper/lower bound, which means no randomization seems to happen, why does this happen and how can we solve it?
@SolvingOptimizationProblems · 3 years ago
That might happen because of (1) the characteristics of the problem, or (2) the functions inside the SA not working properly.
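One concrete way this can show up, assuming the neighbour-generation step clamps candidates to the box bounds as many SA implementations do: once the search sits at a bound, a large share of random steps get clipped straight back onto that bound, so the solution can look frozen there. A small hypothetical illustration:

```python
import random

def clipped_neighbour(x, lo, hi, step=1.0):
    # propose a random neighbour, then clamp it back into [lo, hi]
    return min(hi, max(lo, x + random.uniform(-step, step)))

random.seed(1)
# start exactly at the upper bound: every upward step clips back to 5.0
hits = sum(clipped_neighbour(5.0, 0.0, 5.0) == 5.0 for _ in range(1000))
# roughly half of all proposals land exactly on the bound again
```

Reflecting off the bound instead of clamping is one common way to avoid this pile-up.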
@ashweenasundar · 3 years ago
@SolvingOptimizationProblems Alright, thank you.
@xuancuong7061 · 1 month ago
The code link has a problem.
@RC-rk9em · 2 years ago
I think what you are outputting as best_solution is really the current solution, due to an array copy-by-reference instead of copy-by-value issue. Also, I am not clear why you are calculating E as (current_fitness − best_fitness) and not as (candidate_fitness − current_fitness), which would be the relevant change for evaluating whether a new step should be taken. Finally, for the denominator of the probability term you have a T that decreases due to the cooling rate, but you also have an EA term that depends on n. Can you tell me the basis for this choice, since n will depend on the starting point and the stochastic path? I have only seen a normalization constant beta used in the probability calculation, which can be used to normalize function-value differences between the functions being optimized.
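The copy-by-reference issue raised at the start of this comment is easy to demonstrate in isolation: assigning one Python list to another name does not copy it, so in-place updates to the current solution also change the saved "best" one. A tiny sketch:

```python
current = [1.0, 2.0]

best_by_ref = current      # both names now point at the SAME list object
best_by_val = current[:]   # an independent shallow copy (list(current) also works)

current[0] = 99.0          # mutate the current solution in place

# the reference "copy" silently followed the mutation; the real copy did not
print(best_by_ref[0])  # 99.0
print(best_by_val[0])  # 1.0
```

In an SA loop, saving the best solution with an explicit copy (`best = current[:]` or `copy.deepcopy` for nested structures) avoids this bug.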
@SolvingOptimizationProblems · 2 years ago
Many thanks for your useful comments.
@johntriantafillakis8548 · 1 year ago
Hello. Nice content, just subscribed. How do I modify the code if I want to solve a simple Vehicle Routing Problem with 2 vehicles and no capacity restrictions? Thanks in advance.
@SolvingOptimizationProblems
Great question! For 2 vehicles, we need to update the solution encoding (chromosomes), the objective function, and the constraints. I will make a video as soon as possible. Many thanks!
@gastonamengual · 4 years ago
Why did you decide to include the EA term? I'm curious because this is the only tutorial I've seen that includes it. Thanks.
@SolvingOptimizationProblems · 4 years ago
That is used to calculate the cooling temperature of simulated annealing. Maybe different sources name EA differently.
@asturiasdv7 · 3 years ago
Hey, nice video, you're great, man. I have a question: what is the point of convergence? You earned a subscriber :3
@SolvingOptimizationProblems · 3 years ago
We don't know the exact point of convergence.
@michaelreynolds4268 · 3 years ago
Instructional videos with music in the background instead of narration are not helpful. You should be explaining the steps as you take them, for beginners.
@SolvingOptimizationProblems · 3 years ago
Noted. Many thanks for your suggestion!
@fengnanzhang1262 · 4 years ago
How do we make the results repeatable? How do we use random number seeds?
@SolvingOptimizationProblems · 4 years ago
To use random seeds, modify line 25 in the Python code. I don't understand what you mean by "make the results repeatable".
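"Repeatable" presumably means getting the same output on every run. Seeding the random number generator once at the start of the script achieves exactly that; a general-purpose sketch (not the video's actual line 25):

```python
import random

def run(seed):
    # fixing the seed fixes the entire sequence of "random" numbers
    random.seed(seed)
    return [random.uniform(-5, 5) for _ in range(3)]

assert run(42) == run(42)   # same seed -> identical results on every run
assert run(42) != run(43)   # new seed  -> a different (but still repeatable) run
```

With NumPy-based code, `numpy.random.seed` (or a `numpy.random.default_rng(seed)` generator) plays the same role.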
@hulu8237 · 3 years ago
@SolvingOptimizationProblems I guess he means getting the same, stable result on repeated runs.
@oumaabdi5541 · 3 years ago
I need the complete code and have sent an email. Please answer it.
@SolvingOptimizationProblems · 3 years ago
Yes, thank you
@Username-vp1sh · 4 years ago
Thank you
@SolvingOptimizationProblems · 4 years ago
Thank you for your interest.
@satyamsingh8786 · 4 years ago
Can I get help with performing integration using SciPy on a shapefile?
@SolvingOptimizationProblems · 4 years ago
Sorry, I don't know how to do that.
@SkielCast · 4 years ago
Could you explain what EA means?
@SolvingOptimizationProblems · 4 years ago
Hi, EA stands for elasticity of acceptance
@SkielCast · 4 years ago
@SolvingOptimizationProblems Do you have some literature extending the idea? It is nowhere on Wikipedia.
@SolvingOptimizationProblems · 4 years ago
This is just a parameter used to calculate the cooling temperature. We can call it whatever we want.
@amirrezasadeghi1954 · 4 years ago
Hi, can you explain EA? And explain this: EA = (EA*(n−1)+E)/n. Doesn't this mean that EA = E always?