Ticket #998 (closed bug: fixed)
Test Learners' memory consumption
Reported by: matija
Owned by: ales
When I used the Test Learners widget to evaluate Logistic Regression on a classification problem with 40k instances and 16 attributes, memory consumption was reasonably low for 5-fold cross validation, but it increased dramatically for 100-fold cross validation. I tried the latter after leave-one-out had filled all my RAM and swap (over 10 gigabytes). I intuitively see no reason why this should happen; Test Learners could discard each built model before building the next one (for the next fold). Is there some caching going on in Logistic Regression or Test Learners? Or is it Python's garbage collector that does not perform well in this case?
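To illustrate the suspected behaviour, here is a minimal generic Python sketch (not Orange's actual code; the `Model` class, `folds` helper, and sizes are made up for illustration) that uses weak references to check whether each fold's model is actually freed once the next one is built:

```python
import gc
import weakref

class Model:
    """Toy stand-in for a trained classifier holding a large buffer."""
    def __init__(self, train_indices):
        self.weights = [0.0] * 100_000  # simulate a big fitted model

def folds(n_items, k):
    """Yield (train, test) index lists for k-fold CV over range(n_items)."""
    idx = list(range(n_items))
    size = n_items // k
    for i in range(k):
        test = idx[i * size:(i + 1) * size]
        train = idx[:i * size] + idx[(i + 1) * size:]
        yield train, test

def cross_validate(k):
    refs = []  # weak references let us observe whether old models were freed
    model = None
    for train, test in folds(100, k):
        model = Model(train)  # rebinding drops the reference to the old model
        refs.append(weakref.ref(model))
    del model
    gc.collect()
    # Count how many of the k models have been garbage-collected.
    return sum(r() is None for r in refs)

freed = cross_validate(5)
```

If cross-validation releases each model per fold, `freed` equals the number of folds; if something (the evaluation loop, a results cache) keeps references to all built models, memory grows linearly with the fold count, which would match the 100-fold and leave-one-out blow-up described above.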
I didn't try to run the evaluation from a script, so I don't know whether the problem is in the widget or in the underlying library.
Aleš, if you don't have time for this, you may assign it to me. CCed is Anže, who has recently been working on cross-validation and may have a clue or two about its inner workings.