Conversation

    weights = np.zeros((int(np.ceil(len(x)/stride)), len(x)))  # Could be more space efficient
    x_hats = np.zeros(weights.shape)
    dxdt_hats = np.zeros(weights.shape)
    x_hat = np.zeros(x.shape)

These are all just

                   min(len(x), midpoint + half_window_size + 1))  # +1 because slicing is exclusive of end
    kslice = slice(max(0, half_window_size - midpoint),
                   min(len(kernel), len(kernel) - (midpoint + half_window_size + 1 - len(x))))
    start = max(0, midpoint - half_window_size)

Making start and end explicit variables turns out to be really helpful later.
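Roughly what the explicit bookkeeping looks like near the edges, as a minimal sketch with made-up sizes (the signal, window size, and midpoints here are only for illustration):

```python
import numpy as np

# Toy sizes, just to show how start/end clip at the array boundaries.
x = np.arange(20, dtype=float)
half_window_size = 5

for midpoint in (2, 10, 18):  # near the left edge, interior, near the right edge
    start = max(0, midpoint - half_window_size)
    end = min(len(x), midpoint + half_window_size + 1)  # +1 because slicing is exclusive of end
    window = slice(start, end)
    print(midpoint, start, end, len(x[window]))  # interior windows have the full 2*half_window_size + 1 samples
```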

    weights[i, window] = kernel if kslice.stop - kslice.stop == len(kernel) else kernel[kslice]/np.sum(kernel[kslice])
    if pass_weights: kwargs['weights'] = weights[i, window]
    kstart = max(0, half_window_size - midpoint)
    kend = kstart + (end - start)

This is a way simpler formula than what I had before, though it does the same thing. It's nice to be able to use start and end without having to get the properties from the window.
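A quick sanity check, with the same made-up sizes as the sketch above, that `kstart`/`kend` pick out a kernel slice exactly as long as the clipped window:

```python
import numpy as np

# Assumes a kernel of length 2*half_window_size + 1, normalized to sum to 1.
x = np.arange(20, dtype=float)
half_window_size = 5
kernel = np.ones(2 * half_window_size + 1) / (2 * half_window_size + 1)

for midpoint in (2, 10, 18):
    start = max(0, midpoint - half_window_size)
    end = min(len(x), midpoint + half_window_size + 1)
    kstart = max(0, half_window_size - midpoint)  # kernel samples hanging off the left edge are dropped
    kend = kstart + (end - start)                 # the kernel slice matches the window length by construction
    assert len(kernel[kstart:kend]) == len(x[start:end])
```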

    window = slice(max(0, midpoint - half_window_size),
                   min(len(x), midpoint + half_window_size + 1))  # +1 because slicing is exclusive of end
    kslice = slice(max(0, half_window_size - midpoint),
                   min(len(kernel), len(kernel) - (midpoint + half_window_size + 1 - len(x))))

This endpoint formula was a little crazy. Can be done in a simpler expression.
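For what it's worth, a brute-force check (made-up sizes again) that the old endpoint expression and the simpler `kstart + (end - start)` agree at every midpoint, including both edges:

```python
# The kslice start term is unchanged, so only the stop expressions are compared here.
for n, half_window_size in [(20, 5), (7, 5), (50, 3)]:
    kernel_len = 2 * half_window_size + 1
    for midpoint in range(n):
        start = max(0, midpoint - half_window_size)
        end = min(n, midpoint + half_window_size + 1)
        old_stop = min(kernel_len, kernel_len - (midpoint + half_window_size + 1 - n))
        new_stop = max(0, half_window_size - midpoint) + (end - start)
        assert old_stop == new_stop
```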

    window = slice(start, end)

    # weights need to be renormalized if running off an edge
    weights[i, window] = kernel if kslice.stop - kslice.stop == len(kernel) else kernel[kslice]/np.sum(kernel[kslice])

I was subtracting .stop from .stop here, rather than .start from .stop, so the else condition was always getting executed. That's fine, because normalization isn't bad, but I wasn't saving the compute I was hoping to save with this conditional.
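To spell the typo out (a hedged illustration, not the code in this PR): `.stop - .stop` is always 0, so the comparison never matched and the else branch always ran; the intended length check is `.stop - .start`. Assuming the kernel already sums to 1, the extra renormalization on interior windows was a no-op, which is why the output was unaffected:

```python
import numpy as np

kernel = np.ones(11) / 11       # already normalized
kslice = slice(0, 11)           # an interior window where the full kernel fits

buggy = (kslice.stop - kslice.stop) == len(kernel)       # always False, so the else branch always ran
intended = (kslice.stop - kslice.start) == len(kernel)   # True for interior windows

renormalized = kernel[kslice] / np.sum(kernel[kslice])
assert np.allclose(renormalized, kernel)  # the wasted work didn't change the answer
```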

    dxdt_hat = np.sum(weights*dxdt_hats, axis=0)
    # run the function on the window and add weighted results to cumulative answers
    x_window_hat, dxdt_window_hat = func(x[window], dt, *args, **kwargs)
    x_hat[window] += w * x_window_hat

This is the real insight. No need to add to a big list or array like we were before. Just take the weighted running sum and keep track of cumulative weights for normalization at the end.
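A minimal sketch of the accumulation pattern (only the `x_hat` half is shown; `dxdt_hat` follows the same shape, and the boxcar "smoother" here is just a stand-in for `func`):

```python
import numpy as np

x = np.sin(np.linspace(0, 10, 200)) + 0.1 * np.random.randn(200)
half_window_size, stride = 20, 10
kernel = np.ones(2 * half_window_size + 1)
kernel = kernel / kernel.sum()

x_hat = np.zeros_like(x)
weight_sum = np.zeros_like(x)

for midpoint in range(0, len(x), stride):
    start = max(0, midpoint - half_window_size)
    end = min(len(x), midpoint + half_window_size + 1)
    kstart = max(0, half_window_size - midpoint)
    kend = kstart + (end - start)
    w = kernel[kstart:kend] / np.sum(kernel[kstart:kend])      # renormalize when clipped at an edge

    x_window_hat = np.full(end - start, x[start:end].mean())   # toy stand-in for func(x[window], dt, ...)
    x_hat[start:end] += w * x_window_hat                       # weighted running sum, no (n_windows, n) array
    weight_sum[start:end] += w                                  # total weight per sample, for the final division

x_hat = x_hat / weight_sum   # normalize once at the end
```

Memory stays at a few length-`len(x)` arrays no matter how many windows there are.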

    weight_sum[window] += w  # save sum of weights for normalization at the end

    return x_hat, dxdt_hat
    return x_hat/weight_sum, dxdt_hat/weight_sum

Bam, normalization is much easier.
To address #146, I asked Chat how to make the function more efficient, and it had some good ideas. When I copied in some of the code, though, I hit a strange data-reference issue, because it was modifying the kernel in place, which is a no-no. Some ablation tests and more conversation with Chat eventually got to the bottom of it. It's way better now: not only more efficient, but also easier to read and interpret.
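For the record, the kernel-aliasing pitfall looks roughly like this (a hedged illustration of the failure mode, not the code that was actually generated):

```python
import numpy as np

def bad_weights(kernel, kslice):
    kernel[kslice] /= np.sum(kernel[kslice])        # in-place: mutates the caller's kernel
    return kernel[kslice]

def good_weights(kernel, kslice):
    return kernel[kslice] / np.sum(kernel[kslice])  # new array: the caller's kernel is untouched

kernel = np.ones(5)
good_weights(kernel, slice(0, 3))
print(kernel)   # [1. 1. 1. 1. 1.] -- unchanged
bad_weights(kernel, slice(0, 3))
print(kernel)   # first three entries silently rescaled; every later window would see a different kernel
```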