Time used during callback #46
-
You are right. There was a performance issue there. The issue was solved by the following:
Here is an example of a callback that passes the already-computed fitness to `best_solution()`:

```python
def callback_generation(ga_instance):
    print("Fitness = {fitness}".format(fitness=ga_instance.best_solution(pop_fitness=ga_instance.last_generation_fitness)[1]))
```

For the complete example, check the example.py script. I hope this solves your issue. As usual, you are welcome to ask for any feature or report any issue that comes to your mind. I will be happy to support them. Thanks, Rainer!
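The speed-up comes from reusing fitness values already computed during the generation instead of re-evaluating the whole population a second time. A minimal, library-independent sketch of that caching pattern (`evaluate_population` and `best_of` are illustrative names, not PyGAD API):

```python
# Sketch: reuse the fitness computed during the generation instead of
# re-evaluating the population when looking up the best solution.

call_count = {"n": 0}  # counts how often the (pretend-expensive) fitness runs

def fitness(solution):
    call_count["n"] += 1          # stand-in for an expensive evaluation
    return sum(solution)

def evaluate_population(population):
    # one full (expensive) evaluation per generation
    return [fitness(s) for s in population]

def best_of(population, pop_fitness=None):
    # without cached fitness we would have to re-evaluate everything
    if pop_fitness is None:
        pop_fitness = evaluate_population(population)
    idx = max(range(len(population)), key=lambda i: pop_fitness[i])
    return population[idx], pop_fitness[idx]

population = [[1, 2], [3, 4], [0, 0]]
last_generation_fitness = evaluate_population(population)  # 3 fitness calls

# Reusing the cached fitness adds zero extra fitness calls:
best, best_fit = best_of(population, pop_fitness=last_generation_fitness)
print(best, best_fit, call_count["n"])  # [3, 4] 7 3
```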
-
Now I know what is causing the longer valley in the CPU-usage plot at the beginning of this topic. I compared mutation_type set to random and to adaptive, and the characteristic valley above is related to the adaptive mode. [Plots: identical GA runs with adaptive vs. random mutation.] So it seems very clear that, in my case, the adaptive mode is by far not as performant as random. The adaptive mode is based on a nice idea and might be neat in some cases, but it can come at a speed cost worth considering.
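One plausible reason for the extra cost: adaptive mutation chooses a per-solution mutation rate by comparing each solution's fitness to the population average, which adds fitness bookkeeping that a single fixed random rate avoids. A toy sketch of that decision logic (not PyGAD's actual implementation; the thresholds and rates here are made up):

```python
def adaptive_mutation_rate(solution_fitness, avg_fitness,
                           rate_low_quality=0.25, rate_high_quality=0.05):
    # Toy version of the adaptive idea: weak solutions are mutated more
    # aggressively than strong ones. The two rates are hypothetical.
    if solution_fitness < avg_fitness:
        return rate_low_quality
    return rate_high_quality

population_fitness = [1.0, 4.0, 10.0]
avg = sum(population_fitness) / len(population_fitness)  # 5.0

rates = [adaptive_mutation_rate(f, avg) for f in population_fitness]
print(rates)  # [0.25, 0.25, 0.05]
```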
-
I have made a larger test and found out that my callback function is not ideal performance-wise. It seems that I introduced three rounds of heavy multicore processing per generation. One redundancy is the second call of `ga_instance.best_solution()`, which can and should be avoided. In the figure below one can see three plateaus. If I completely disable `on_generation=callback`, I get only one plateau. This results in a performance gain, which again is quite significant (1091.18 second(s) vs. 775.59 second(s)). I was also taking the best solution to run my time series analysis again (recheck) to retrieve the result, which is the basis for calculating the actual fitness function.
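One way to avoid that redundant recheck is to memoize the expensive evaluation by solution, so re-evaluating the best solution becomes a cache hit. A minimal sketch, where the hypothetical `analyze` function stands in for the expensive time-series analysis:

```python
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)
def analyze(solution):
    # Stand-in for the expensive time-series analysis; `solution`
    # must be hashable (a tuple) for lru_cache to work.
    calls["n"] += 1
    return sum(solution) * 2

# First evaluation during the generation: real work.
fitness = analyze((1, 2, 3))
# Recheck of the same best solution: served from the cache, no extra work.
recheck = analyze((1, 2, 3))
print(fitness, recheck, calls["n"])  # 12 12 1
```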
Findings
It seems that `ga_instance.best_solution()` triggers a full evaluation on its own, which may not matter in cases where the time it needs is insignificant. In my case it does not appear to be wise. It can also be seen that the gene space of 8 genes, with a total of 5 trillion unique combinations 😵, takes some time on its own each generation, but it is working.
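For scale: the size of such a discrete search space is simply the product of the per-gene value counts. The per-gene counts below are hypothetical (the post does not state them), but eight genes with about 39 values each already land in the 5-trillion range:

```python
import math

# Hypothetical per-gene value counts: eight genes, ~39 values each.
gene_space_sizes = [39] * 8

combinations = math.prod(gene_space_sizes)
print(combinations)  # 5352009260481 (~5.35 trillion)
```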
Maybe it would be nice to have some sort of verbose mode that prints internal numbers, to get a feeling for the quantities/calculations under the hood. I'm not sure if these cascading processes can be sped up, but it is interesting that the multicore part I run is not taking that much time compared to the overall cycle (in my distinctive example above).
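Until such a verbose mode exists, a per-generation timing log can be approximated inside the on_generation callback itself. A library-independent sketch (the callback simply ignores the GA instance it would be passed):

```python
import time

class GenerationTimer:
    """Records the wall-clock duration of each generation; usable as an
    on_generation-style callback."""

    def __init__(self):
        self.last = time.perf_counter()
        self.durations = []

    def __call__(self, ga_instance=None):
        now = time.perf_counter()
        self.durations.append(now - self.last)
        self.last = now

timer = GenerationTimer()
for generation in range(3):   # stand-in for the GA's generation loop
    time.sleep(0.01)          # stand-in for real per-generation work
    timer(None)

print(len(timer.durations))   # 3
```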