London | 25-SDC-July | Ali Qassab | Sprint 2 | Improve code with caches #48
AliQassab wants to merge 3 commits into CodeYourFuture:main from
Conversation
OracPrime left a comment
There are some serious inefficiencies in this attempt to make the code more efficient; it's worth revisiting to improve the efficiency.
    return 0

    # Create cache key from current state
    cache_key = (total, tuple(coins))
Converting the list to a tuple every time is going to be expensive. At each stage you're also creating a new list, which will be expensive too. The new lists are always the same list, just starting from a later position. Can you think of a way to refactor the code so that it always uses the same tuple, with the recursion achieved by passing a parameter indicating where the first usable entry in the tuple is? In that situation, how much can you simplify the cache key as well?
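For context, here is a minimal sketch of the pattern being criticized, reconstructed around the quoted diff lines (`return 0` and `cache_key = (total, tuple(coins))`). The function name `count_ways`, its signature, and the base cases are assumptions, not the PR's actual code:

```python
cache = {}

def count_ways(total, coins):
    """Hypothetical reconstruction: count the ways to make `total` from `coins`."""
    if total == 0:
        return 1
    if total < 0 or not coins:
        return 0

    # Every call converts the list to a tuple just to build a hashable key...
    cache_key = (total, tuple(coins))
    if cache_key in cache:
        return cache[cache_key]

    # ...and coins[1:] copies the remaining coins into a brand-new list,
    # even though it is always the same list minus its first element.
    result = count_ways(total, coins[1:]) + count_ways(total - coins[0], coins)
    cache[cache_key] = result
    return result
```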
    result = fibonacci(n - 1) + fibonacci(n - 2)

    cache[n] = result
    return result
Refactored to use a single tuple with a coin_index parameter instead of creating new lists via slicing. This eliminates the expensive tuple conversions and list slicing, and simplifies the cache key to (total, coin_index).
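A minimal sketch of what that refactor could look like, reusing the hypothetical `count_ways` name from the sketch above; the exact code in the PR may differ:

```python
cache = {}

def count_ways(total, coins, coin_index=0):
    """Count ways to make `total` using coins[coin_index:].

    `coins` is the same tuple on every call, so there is no slicing and no
    per-call tuple conversion, and the cache key shrinks to two integers.
    """
    if total == 0:
        return 1
    if total < 0 or coin_index >= len(coins):
        return 0

    cache_key = (total, coin_index)
    if cache_key in cache:
        return cache[cache_key]

    # Either skip the current coin, or use it and stay on the same index.
    result = (count_ways(total, coins, coin_index + 1)
              + count_ways(total - coins[coin_index], coins, coin_index))
    cache[cache_key] = result
    return result
```

Called as `count_ways(10, (1, 2, 5))`, the coins tuple is built once by the caller and shared by every recursive call. Note that a module-level cache keyed only on `(total, coin_index)` assumes a single coins tuple per run; it would need clearing, or an extra key component, if different coin sets were queried in the same process.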
Learners, PR Template
Self checklist
Changelist
Added memoization caching to improve performance:
Both functions now run significantly faster while maintaining correct results.
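For reference, the memoization pattern the diff fragments show for `fibonacci`, filled out into a runnable sketch; the base case and the module-level `cache` dict are assumptions around the quoted lines:

```python
cache = {}

def fibonacci(n):
    # Assumed base case; only the three lines below appear in the diff.
    if n < 2:
        return n
    if n in cache:
        return cache[n]

    result = fibonacci(n - 1) + fibonacci(n - 2)
    cache[n] = result
    return result
```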
Questions
Ready for review! All tests pass.