Implement cache spike vector for phy sorting reader #4579
Conversation
This LGTM! @grahamfindlay can you take a look?
Eh, sorry I haven't had time to pick #4502 back up. As I understand it, this makes 2 changes:
My only real concern would be this comment in the new implementation:

> # the order for 2 units with the same sample_index is not garanty here but should be OK

If it's okay not to sort the co-temporal spikes, why were we doing it in the first place? Should we also remove this guarantee from the parent class's implementation? In my last comment on #4502 I mentioned that I had found a way to do the lexsorting much more quickly than

In any case, I don't see any reason that should block this. This looks good and is a big immediate win (assuming it really is okay to break the spike vector sortedness promise)!
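For reference, a minimal sketch of the trade-off being discussed (the array names `sample_index` and `unit_index` here are illustrative, following the spike-vector layout; this is not the PR's actual code): a single stable sort on the time axis is typically much faster than a full lexsort, and gives up only the tie-breaking on `unit_index` for co-temporal spikes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sample_index = rng.integers(0, 30_000_000, size=n)
unit_index = rng.integers(0, 342, size=n)

# Full lexsort: primary key sample_index, secondary key unit_index
order_lex = np.lexsort((unit_index, sample_index))

# Single stable sort on time only: co-temporal spikes keep their input
# order instead of being ordered by unit_index, which "should be OK"
order_fast = np.argsort(sample_index, kind="stable")

# Both orders are time-sorted; they can differ only where sample_index ties
assert np.array_equal(sample_index[order_lex], sample_index[order_fast])
```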
Try to implement the spike vector caching for the phy reader in a faster way than #4502.
@alejoe91 @grahamfindlay: the implementation of the caching was slow mainly due to the final lexsort, which is normally not needed.
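A rough sketch of the idea, with a hypothetical helper (`build_spike_vector` is not the actual SpikeInterface API): since phy stores spikes already in time order, the cached spike vector can usually be assembled directly, and a cheap monotonicity check guards a fallback stable sort for inputs that are not time-ordered.

```python
import numpy as np

def build_spike_vector(sample_index, unit_index):
    """Hypothetical helper: build a structured (sample_index, unit_index)
    spike vector, sorting only when the input is not already time-ordered."""
    spike_vector = np.zeros(len(sample_index),
                            dtype=[("sample_index", "int64"),
                                   ("unit_index", "int64")])
    spike_vector["sample_index"] = sample_index
    spike_vector["unit_index"] = unit_index
    # phy's spike_times.npy is already time-ordered, so this check
    # normally passes and the sort (the slow step) is skipped entirely
    if not np.all(np.diff(sample_index) >= 0):
        # stable sort on time only: co-temporal spikes keep input order
        order = np.argsort(sample_index, kind="stable")
        spike_vector = spike_vector[order]
    return spike_vector
```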
Performance for the file provided by Graham (400M spikes, 342 units):
`read_phy()` + `sorting.to_spike_vector()` + `sorting.to_reordered_spike_vector()`
I will work in a separate PR to improve select_unit() and the caching.
Important note for us: we should re-read all the sorting readers and apply the same trick where necessary.