attempted to collapse function layers in linear module, put sliding function in utilities and gave it and a few other utils tests #115

pavelkomarov merged 2 commits into master

Conversation
  - **dxdt_hat** -- estimated derivative of x
  """
  if params != None: # boilerplate backwards compatibility code
      warn("""`params` and `options` parameters will be removed in a future version. Use `r`,
I've determined these print prettier if we use `+`-concatenated single-`"` strings instead of triple-quoted strings.
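For illustration, a minimal sketch of why concatenated string literals print more cleanly than an indented triple-quoted string in a `warn` call (the function name and message text here are hypothetical, not the library's actual code):

```python
from warnings import warn

def deprecated_params_demo():
    # Adjacent string literals concatenate at compile time, so the warning
    # prints as one clean wrapped message. A triple-quoted string would
    # instead carry the source file's indentation into the printed output.
    warn("`params` and `options` parameters will be removed in a future version. "
         "Use the named keyword arguments instead.",
         DeprecationWarning, stacklevel=2)
```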
  num_iterations = params[1]

- kernel = utility._mean_kernel(window_size)
+ kernel = utility.mean_kernel(window_size)
These are actually called explicitly from other modules, so they're basically public. Changed the names to reflect that.
  (iterated_first_order, {'num_iterations':5}), (iterated_first_order, [5], {'iterate':True}),
  (second_order, {}),
- (lineardiff, {'order':3, 'gamma':5, 'window_size':10, 'solver':'CLARABEL'}), (lineardiff, [3, 5, 10], {'solver':'CLARABEL'}),
+ (lineardiff, {'order':3, 'gamma':5, 'window_size':11, 'solver':'CLARABEL'}), (lineardiff, [3, 5, 11], {'solver':'CLARABEL'}),
Window size needs to be odd now, or slide_function raises a ValueError. Internally a +1 was being applied here anyway.
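A minimal sketch of the odd-window contract described above. `slide_function_sketch` and its edge-padding strategy are hypothetical stand-ins, not the library's actual `slide_function`:

```python
import numpy as np

def slide_function_sketch(f, x, window_size):
    """Apply f to each centered window of odd length. An even window has
    no center sample, so it is rejected up front rather than being
    silently bumped with a +1."""
    if window_size % 2 == 0:
        raise ValueError("window_size must be odd")
    half = window_size // 2
    # Pad the edges by repeating boundary values so output matches input length.
    padded = np.pad(x, half, mode="edge")
    return np.array([f(padded[i:i + window_size]) for i in range(len(x))])

# e.g. a sliding mean with window 3:
# slide_function_sketch(np.mean, np.arange(5.0), 3)
```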
- assert params_1 == [6, 50]
- assert params_2 == [4, 10]
+ assert params_1 == [2, 10]
+ assert params_2 == [2, 10]
There is something fishy going on with the optimizer tests, because my test results show my changes didn't hurt the performance of polydiff. In fact, my changes really didn't change its answers at all. Here are test error bounds for polydiff from master:
[(7.021666937153402e-16,4.440892098500626e-16), (4.7431037974113776e-15,2.2447590091290175e-15), (0.17207780556984145,0.07461757391318558), (1.7611407009335684,1.3527833818133899), .
[(4.429775955663296e-15,1.7763568394002505e-15), (4.1994029975956965e-14,3.4638958368304884e-14), (0.17207780556984162,0.07461757391318535), (1.7611407009335693,1.3527833818133883), .
[(3.604379701537913e-15,1.7763568394002505e-15), (3.641041852535394e-14,3.019806626980426e-14), (0.17207780556984195,0.07461757391318535), (1.7611407009335762,1.3527833818133876), .
[(0.0040889903847632905,0.0020084285347912734), (0.2996093831637371,0.15854793239630816), (0.1711676933108151,0.07380862479044104), (1.6826386303795406,1.1942354494170848), .
[(0.1642611446380676,0.10722401786495617), (8.212910852645555,4.239201287556497), (0.1914916714106972,0.09178849455559435), (7.831325713420706,3.904257385010105), .
[(0.9172050736054983,0.5852499335551089), (147.08949276192166,145.3972707667044), (0.9868557348574576,0.6471894444344317), (148.3864607177945,146.75005414851773), .
and from this branch:
[(3.2406343207394936e-15,6.661338147750939e-16), (5.821032769018095e-15,4.013945104655408e-15), (0.1720778055698414,0.07461757391318558), (1.761140700933566,1.352783381813388), .
[(5.2451429026846145e-15,3.552713678800501e-15), (5.2374763832272693e-14,4.196643033083092e-14), (0.1720778055698412,0.07461757391318535), (1.7611407009335693,1.352783381813392), .
[(3.99371774508882e-15,2.6645352591003757e-15), (5.33440070537681e-14,4.618527782440651e-14), (0.1720778055698414,0.07461757391318535), (1.7611407009335731,1.3527833818133868), .
[(0.004088990384765382,0.002008428534791218), (0.29960938316374264,0.15854793239630416), (0.1711676933108144,0.07380862479044037), (1.682638630379538,1.1942354494170857), .
[(0.16426114463807995,0.1072240178649313), (8.212910852645468,4.239201287556277), (0.19149167141071077,0.09178849455560811), (7.831325713420657,3.9042573850098847), .
[(0.9172050736054965,0.5852499335551062), (147.08949276192163,145.39727076670437), (0.9868557348574571,0.6471894444344255), (148.3864607177945,146.75005414851773), .
So why should there be a difference from the optimizer's end? I'll solve this when I address the optimizer. Beyond the scope of this PR.
- if smoothing_win > len(x)-1:
-     smoothing_win = len(x)-1
+ window_size = np.clip(window_size, polynomial_order + 1, len(x)-1)
np.clip enforces both bounds at once: the window must contain at least polynomial_order + 1 points for the fit, and it can't exceed len(x)-1.
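For illustration, the same bound as a standalone sketch (`bound_window` is a hypothetical name, not the library's function):

```python
import numpy as np

def bound_window(window_size, polynomial_order, n_points):
    # One np.clip call enforces both constraints at once: at least
    # polynomial_order + 1 points are needed to fit the polynomial,
    # and the window can't exceed the data length minus one.
    return int(np.clip(window_size, polynomial_order + 1, n_points - 1))

bound_window(3, 5, 100)    # too small -> clipped up to 6
bound_window(500, 5, 100)  # too large -> clipped down to 99
bound_window(21, 5, 100)   # already in range -> stays 21
```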
  return A, np.array(C.value)

  def __integrate_dxdt_hat_matrix__(dxdt_hat, dt):
This one can be one-lined in the caller.
  def lineardiff(x, dt, params=None, options=None, order=None, gamma=None, window_size=None,
                 sliding=True, step_size=10, kernel='friedrichs', solver=None):
`sliding` is an unnecessary keyword in the new interface, because we can infer whether we're sliding from the presence of a `window_size`, so I've removed it.
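A minimal sketch of that interface convention (the function name and defaults here are hypothetical, not the library's actual signature):

```python
# Instead of a separate `sliding` flag, the mode is inferred from whether
# the caller supplied a window_size at all.
def lineardiff_interface_sketch(order=2, gamma=1, window_size=None, step_size=10):
    sliding = window_size is not None  # the flag the old interface took explicitly
    return "sliding" if sliding else "global"
```

One fewer knob, and the two parameters can no longer disagree (e.g. `sliding=True` with no `window_size` supplied).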
The PR only accomplishes things we already talked about, so I'm taking initiative and being a bit unilateral.