sbhakat opened 6 years ago
Weird, I have it working on my machine. Try the following for the color instead:

```python
c = np.array(new_y.data.tolist())[:, 0]
```
Cool, it is working. Is this a NumPy array type error?
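Roughly, yes. A minimal sketch of what goes wrong, assuming `new_y` is an (N, 1) tensor: `new_y.data.tolist()` yields a nested list, and matplotlib's `scatter` needs a flat 1-D sequence for `c=`, which is why indexing with `[:, 0]` fixes it.

```python
import numpy as np

# Stand-in for new_y.data.tolist() on an (N, 1) tensor: a nested list,
# which scatter would try (and fail) to interpret as per-point RGBA colors.
nested = [[0.1], [0.2], [0.3]]

# Converting to an array and taking column 0 gives the flat shape scatter wants.
c = np.array(nested)[:, 0]
print(c.shape)  # (3,)
```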
One more, possibly small, error. When using the following command

```python
from helper_func.helper import *
```

I got an error like:

```
ModuleNotFoundError Traceback (most recent call last)
```
You have to be in the same folder to be able to call that. It won't work for arbitrary scripts.
What do you mean by the same folder? The same folder in my computer's path, or according to the GitHub file structure? I am in the same folder on my computer.
Sorry, okay, got it.
Okay, now I have put all the files in the correct path, but when I try to execute the following command I get an error:

```python
if os.path.isfile("./autoencoder.net"):
    model = torch.load("./autoencoder.net")
else:
    model = AutoEncoder()
```

```
ModuleNotFoundError Traceback (most recent call last)
```
Sorry again. I have trouble understanding the following things
```python
# Hyper Parameters
input_size = 4
hidden_size = 8
# The output class becomes our Plumed collective variable (CV)
num_classes = 1
```
What is a hyperparameter? How does one set the values in each case?
What is the following part doing?
```python
class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.df = df
        self.hidden_size = hidden_size
        self.input_size = input_size
        self.num_classes = num_classes
        self.l1 = nn.Linear(input_size, hidden_size)
        self.l2 = nn.Sigmoid()
        self.l3 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.l1(x)
        out = self.l2(out)
        out = self.l3(out)
        return out


class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.l1 = nn.Linear(num_classes, hidden_size)
        self.l2 = nn.Sigmoid()
        self.l3 = nn.Linear(hidden_size, input_size)

    def forward(self, x):
        out = self.l1(x)
        out = self.l2(out)
        out = self.l3(out)
        return out


class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.fc1 = Encoder()
        self.fc2 = Decoder()

    def forward(self, x):
        return self.fc2(self.fc1(x))
```
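In short, the classes above compress the input features down to `num_classes` values (the CV) and then reconstruct them. A self-contained sketch of that round trip, using `nn.Sequential` stand-ins with the hyperparameters from this thread (4 → 8 → 1 → 8 → 4):

```python
import torch
import torch.nn as nn

input_size, hidden_size, num_classes = 4, 8, 1

# Encoder: compress 4 features to a 1-D bottleneck (the collective variable).
encoder = nn.Sequential(nn.Linear(input_size, hidden_size), nn.Sigmoid(),
                        nn.Linear(hidden_size, num_classes))
# Decoder: expand the bottleneck back to the original 4 features.
decoder = nn.Sequential(nn.Linear(num_classes, hidden_size), nn.Sigmoid(),
                        nn.Linear(hidden_size, input_size))

x = torch.zeros(10, input_size)   # hypothetical batch of 10 frames
cv = encoder(x)                   # shape (10, 1): one CV value per frame
recon = decoder(cv)               # shape (10, 4): reconstructed features
print(cv.shape, recon.shape)
```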
I tried to train the autoencoder with the following script
```python
optimizer = optim.Adam(model.parameters(), lr=0.5)
loss_list = []
for epoch in range(5):
    for i in rnd_sampler:
        optimizer.zero_grad()
        x = Variable(torch.from_numpy(np.vstack(i)))
        y = model(x)
        output = loss(y, x)
        output.backward()
        loss_list.extend(output.data.tolist())
        optimizer.step()
```
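As an aside, on PyTorch 0.4 and later `Variable` is unnecessary and `.item()` is the idiomatic way to log a scalar loss. A modernized sketch of the same loop, with hypothetical stand-ins for the thread's `model`, `loss`, and `rnd_sampler`:

```python
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-ins so the sketch runs on its own.
model = nn.Linear(4, 4)
loss_fn = nn.MSELoss()
rnd_sampler = [np.random.rand(8, 4).astype(np.float32) for _ in range(3)]

optimizer = optim.Adam(model.parameters(), lr=1e-3)  # lr=0.5 is usually far too high for Adam
loss_list = []
for epoch in range(5):
    for batch in rnd_sampler:
        optimizer.zero_grad()
        x = torch.from_numpy(batch)      # Variable() is not needed in PyTorch >= 0.4
        output = loss_fn(model(x), x)    # reconstruction loss against the input itself
        output.backward()
        optimizer.step()
        loss_list.append(output.item())  # .item() extracts the Python float
```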
But it gives an error like
```
ModuleNotFoundError                       Traceback (most recent call last)
ModuleNotFoundError: No module named 'pandas.indexes'
```
That is a pandas issue; look into updating pandas to 0.19, I believe. Google might be able to help as well.
> What is a hyperparameter? How does one set the values in each case?
They specify the network's size and shape and how it is trained. For example, `input_size` is the number of input features and `hidden_size` is the number of hidden nodes per layer. The PyTorch website will likely have more detail.
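The key point is that hyperparameters are chosen by hand before training rather than learned from data. A small sketch of how one of them directly shapes the network, counting the parameters of a single layer for a few hidden sizes:

```python
import torch.nn as nn

# Each Linear(4, hidden_size) layer has 4*hidden_size weights plus
# hidden_size biases, so changing the hyperparameter changes the model size.
param_counts = {}
for hidden_size in (4, 8, 16):
    layer = nn.Linear(4, hidden_size)
    param_counts[hidden_size] = sum(p.numel() for p in layer.parameters())
print(param_counts)  # {4: 20, 8: 40, 16: 80}
```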
```
RuntimeError: size mismatch, m1: [500 x 2], m2: [4 x 8] at /opt/conda/conda-bld/pytorch-cpu_1518282373170/work/torch/lib/TH/generic/THTensorMath.c:1434
```
What is your input number of features? For example, alanine has 4 features, which are the sine/cosine transforms of the backbone dihedrals φ and ψ. In your case there is a mismatch between the vectors, so PyTorch is complaining about that.
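A minimal sketch of that featurization, assuming a hypothetical array of (φ, ψ) angles in radians: each dihedral becomes two features (its sine and its cosine), so alanine's 2 dihedrals yield the 4 input features the network expects.

```python
import numpy as np

# Hypothetical (phi, psi) dihedrals for one frame, in radians.
dihedrals = np.array([[0.5, -1.2]])

# Sine/cosine transform: 2 angles -> 4 features, and the periodicity
# of the angles is handled smoothly (no jump at +/- pi).
features = np.hstack([np.sin(dihedrals), np.cos(dihedrals)])
print(features.shape)  # (1, 4)
```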
PS: I ended up writing a paper on how to use autoencoders and their variants for enhanced sampling. The following GitHub repo is a bit more complete, since we wrote a lot more code for it: https://github.com/msultan/vde_metadynamics
Okay, got it. I had two features, hence the mismatch.
However, I didn't understand the exact meaning of

```python
# The output class becomes our Plumed collective variable (CV)
num_classes = 1
```
When I try to render the Plumed file I get an error like:

```
ValueError Traceback (most recent call last)
```
Yeah, so I didn't write the code to transform all possible featurizers from MSMBuilder to Plumed. I think it currently does dihedrals, alpha-carbon dihedrals, and contact distances. It might also do angles, but definitely not center of mass. One reason is that I am fairly sure, though I could be wrong, that the COM calculation in Plumed is different from the one in MDTraj/MSMBuilder, so I need to write a new featurizer for MSMBuilder first to make everything compatible. Unfortunately, all the packages use slightly different definitions of everything, which makes writing these transforms rather cumbersome.
Distances and dihedrals are pretty alright, and I implemented the closest-heavy-atom distance in MDTraj to be the same as Plumed for that reason. If you want to work on it yourself, please feel free to file a pull request!
In my case it is just two features, both simple CA distances. So how should I render?
Ohh, try using the VDE_Metadynamics module. I think that has the CA distances.
Sorry about the code fragmentation on this; this repo was more of a weekend project to see if I could teach Plumed basic neural networks.
`print(df)` gives the following:

```
           atominds featuregroup featurizer  otherinfo      resids    resnames  \
0   [[534], [1203]]           ca    Contact         20    [35, 79]  [ASP, VAL]
1  [[3336], [4558]]           ca    Contact         20  [215, 293]  [ASH, LEU]

      resseqs
0    [36, 80]
1  [216, 294]
```
Ha ha, I am laughing at myself; this is the noobiest question. So I installed VDE Metadynamics, and all the render files are in this path:

```
~/miniconda2/envs/py36/lib/python3.6/site-packages/vde_metadynamics-0.1-py3.6.egg/vde_metadynamics
```

How can I define the path inside the notebook to import the .py scripts?
Try walking through the following notebook:
https://github.com/msultan/vde_metadynamics/blob/master/examples/alanine/ala_res.ipynb
But that is not the problem; it seems like my installation path is not being picked up.

```
ModuleNotFoundError Traceback (most recent call last)
```
Weird, can you try

```shell
python setup.py develop
```

in your folder, and let me know if that works? Also, you might have to restart the IPython notebook kernel.
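If reinstalling doesn't help, a quick notebook-side workaround (a sketch, assuming the egg path quoted earlier in the thread) is to append the package's directory to `sys.path` before importing:

```python
import os
import sys

# Hypothetical workaround: point Python at the installed egg directory so the
# notebook can import modules that live outside its default search path.
pkg_dir = os.path.expanduser(
    "~/miniconda2/envs/py36/lib/python3.6/site-packages/vde_metadynamics-0.1-py3.6.egg"
)
if pkg_dir not in sys.path:
    sys.path.append(pkg_dir)

# import vde_metadynamics  # should now resolve, if the egg exists at that path
```

`python setup.py develop` is the cleaner fix, since it registers the package with the environment instead of patching the path in every notebook.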
Hi,
While executing the following commands

```python
scatter(plot_feat[:, 0], plot_feat[:, 1], c=new_y.data.tolist(), cmap='jet')
xlim([-np.pi, np.pi])
ylim([-np.pi, np.pi])
cb = plt.colorbar()
xlabel(r'$\phi$')
ylabel(r'$\psi$')
cb.set_label("Neural Network Projection")
```

I got an error like:
```
TypeError                                 Traceback (most recent call last)
~/miniconda2/envs/py36/lib/python3.6/site-packages/matplotlib/colors.py in to_rgba(c, alpha)
    131     try:
--> 132         rgba = _colors_full_map.cache[c, alpha]
    133     except (KeyError, TypeError):  # Not in cache, or unhashable.

TypeError: unhashable type: 'list'

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
~/miniconda2/envs/py36/lib/python3.6/site-packages/matplotlib/axes/_axes.py in scatter(self, x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, verts, edgecolors, **kwargs)
   3985             # must be acceptable as PathCollection facecolors
-> 3986             colors = mcolors.to_rgba_array(c)
   3987         except ValueError:

~/miniconda2/envs/py36/lib/python3.6/site-packages/matplotlib/colors.py in to_rgba_array(c, alpha)
    232     for i, cc in enumerate(c):
--> 233         result[i] = to_rgba(cc, alpha)
    234     return result

~/miniconda2/envs/py36/lib/python3.6/site-packages/matplotlib/colors.py in to_rgba(c, alpha)
    133     except (KeyError, TypeError):  # Not in cache, or unhashable.
--> 134         rgba = _to_rgba_no_colorcycle(c, alpha)
    135     try:

~/miniconda2/envs/py36/lib/python3.6/site-packages/matplotlib/colors.py in _to_rgba_no_colorcycle(c, alpha)
    188     if len(c) not in [3, 4]:
--> 189         raise ValueError("RGBA sequence should have length 3 or 4")
    190     if len(c) == 3 and alpha is None:

ValueError: RGBA sequence should have length 3 or 4

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
```