Support for random walk kernel #29
Conversation
```
@@ -0,0 +1,14 @@
# Author: Henry Moss & Ryan-Rhys Griffiths
```
You can change the author name to yours!
Haha was waiting for permission
```python
# Author: Henry Moss & Ryan-Rhys Griffiths
"""
Molecule kernels for Gaussian Process Regression implemented in GPflow.
"""
```
Shall we change this module-level docstring?
```
@@ -0,0 +1,102 @@
# Author: Henry Moss & Ryan-Rhys Griffiths
```
Having a kernel_modules directory is probably a good idea.
I'm happy for it to have another (less clumsy) name, but yeah, I think it was the only way for me to implement the kernel without contaminating pre-existing code.
```python
from GP.kernel_modules.random_walk import RandomWalk
```
```python
sys.path.append('/Users/juliusschwartz/Mystuff/FlowMO')
```
I hope so xD This was absolutely a hack and I completely forgot that I'd left a hardcoded file path in here. I'd definitely get this PR rejected at work
```python
from .kernel_utils import normalize
```
```python
class RandomWalk(gpflow.kernels.Kernel):
```
Should we have some documentation for the class?
Definitely. I suppose I hoped for the main logic to be looked at first, but honestly I should have made a separate feature branch and done multiple PRs onto it (one for the logic, one for documentation, etc.).
I'll just add Leo to the review since he coded up the random walk kernel in GProTorch.

Good idea (I don't seem to be able to add people myself).
```python
:param X: array of N graph objects (represented as adjacency matrices of varying sizes).
:return: N x 1 array.
"""
return tf.linalg.tensor_diag_part(self.K(X))
```
Couldn't think of anything smarter than implementing this function like this
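If building the full N x N Gram matrix ever becomes too expensive just to read off its diagonal, one alternative would be to evaluate the kernel only on each graph paired with itself. A minimal sketch, where `pair_kernel` is a hypothetical stand-in (not part of this PR) for evaluating the kernel on a single pair of graphs:

```python
import numpy as np

def pair_kernel(A1, A2):
    # Hypothetical placeholder for a single-pair kernel evaluation;
    # in the PR the real computation lives inside RandomWalk.K.
    return float(np.sum(A1) * np.sum(A2))

def k_diag(graphs):
    # Evaluate k(x, x) once per graph: N pair evaluations instead of the
    # N^2 needed to form the full Gram matrix and extract its diagonal.
    return np.array([pair_kernel(A, A) for A in graphs])
```

Whether this is worth the extra code path depends on how large N gets in practice; for small datasets the `tensor_diag_part(self.K(X))` version is perfectly fine.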
Converting the work done in https://github.com/leojklarner/GProTorch/blob/kernels/gprotorch/kernels/graph_kernels/random_walk.py from PyTorch to TensorFlow, using https://www.jmlr.org/papers/volume11/vishwanathan10a/vishwanathan10a.pdf as a reference.

Have only implemented the eigendecomposition approach so far (meaning that one test for which GraKel uses Conjugate Gradient Descent only passes if `method_type="baseline"` is specified as an argument to the constructor of GraKel's random walk kernel). Tests with non-null values of `p` don't pass, although I suspect this might be due to a bug in GraKel (see ysig/GraKeL#71).
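For reference, the eigendecomposition approach from the Vishwanathan et al. paper can be sketched in a few lines. This is a rough NumPy sketch, not the PR's TensorFlow code: it assumes unlabelled graphs, uniform start/stop distributions, and a geometric decay weight `lam` small enough that `lam * |d1_max * d2_max| < 1`. The key trick is that the eigenvalues of the Kronecker product adjacency A1 ⊗ A2 are the pairwise products of the factor eigenvalues, so the resolvent never has to be formed explicitly:

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    # Eigendecompose each (symmetric) adjacency matrix.
    d1, V1 = np.linalg.eigh(A1)
    d2, V2 = np.linalg.eigh(A2)
    # Project uniform start/stop probabilities onto each eigenbasis.
    a1 = V1.T @ (np.ones(A1.shape[0]) / A1.shape[0])
    a2 = V2.T @ (np.ones(A2.shape[0]) / A2.shape[0])
    # Eigenvalues of A1 (x) A2 are the products d1_i * d2_j, so the
    # resolvent (I - lam * A_x)^{-1} is diagonal in the joint eigenbasis
    # and the kernel reduces to a weighted sum over eigenvalue pairs.
    denom = 1.0 - lam * np.outer(d1, d2)
    return float(np.sum(np.outer(a1**2, a2**2) / denom))
```

This costs one eigendecomposition per graph plus an n1 x n2 elementwise sum per pair, versus solving a Kronecker-structured linear system per pair for the conjugate-gradient variant, which is presumably why the two GraKel code paths can disagree on edge cases.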