Closed: pereverges closed this issue 2 years ago
Replace and from_tensor functions do not work properly
For all the methods we opted for a design where the user has to provide the exact item that will be replaced, so that they can first recover it from the approximate one using whatever kind of cleanup memory they prefer. Otherwise we would either have to keep a reference to an item memory in each data structure, or remove the approximate version directly, which would introduce a lot of noise into the value hypervector. What do you think would improve this design? What do you have in mind?
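To make that trade-off concrete, here is a minimal, self-contained sketch of the pattern (NumPy, not the actual torchhd implementation; the cyclic-shift position encoding and the `cleanup` helper are assumptions for illustration): the caller recovers the exact item from the noisy retrieved one via their own cleanup memory, then passes the exact item to the replace step, so no approximation noise enters the value hypervector.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 10000

# item memory: the exact random bipolar hypervectors known to the application
item_memory = rng.choice([-1.0, 1.0], size=(n, d))

def cleanup(noisy):
    # nearest-neighbor lookup: recover the exact stored vector from a noisy one
    return item_memory[int(np.argmax(item_memory @ noisy))]

# sequence value: bundle (elementwise sum) of position-permuted items,
# with a cyclic shift standing in for the position permutation
value = sum(np.roll(item_memory[i], i) for i in range(n))

# retrieving position 2 undoes its shift but keeps noise from the other items
approx = np.roll(value, -2)

# the caller cleans up the approximate item themselves...
exact_old = cleanup(approx)
assert np.array_equal(exact_old, item_memory[2])

# ...and hands the exact one in, so the replacement subtracts cleanly
value = value - np.roll(exact_old, 2) + np.roll(item_memory[5], 2)
```

Keeping `cleanup` on the caller's side is exactly the design choice discussed here: the data structure stays agnostic about how items are stored and recovered.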
What is the problem with the `replace` and `from_tensor` functions?
Could we look up the last (or first) element and then do the pop/popleft ourselves? We could also be given the option of keeping track of a memory ourselves.
I am not able to make `replace` actually replace a value. The `from_tensor` function gives me an error because of the dimensions.
>>> hv = torchhd.random_hv(10, 10000)
>>> S = torchhd.structures.Sequence.from_tensor(hv)
>>> len(S)
0
>>> S.value
tensor([-2., -2., 0., ..., -2., -2., 4.])
The `from_tensor` method doesn't give me a dimension error, but there is a bug: the size of the sequence isn't set correctly. I will fix that.
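The symptom above (`len(S) == 0` even though the value hypervector is populated) suggests the constructor bundles the inputs but never records how many there were. A minimal sketch of the fix, under assumed semantics (this is illustrative NumPy code, not torchhd's actual implementation):

```python
import numpy as np

class Sequence:
    """Minimal bundled-sequence sketch (assumed encoding, not torchhd's code)."""

    def __init__(self, dimensions):
        self.value = np.zeros(dimensions)
        self.size = 0

    @classmethod
    def from_tensor(cls, hvs):
        # hvs has shape (num_items, dimensions); the size must be taken
        # from the number of rows, otherwise len() stays 0 as in the bug
        seq = cls(hvs.shape[1])
        seq.size = hvs.shape[0]
        for i in range(seq.size):
            # cyclic shift stands in for the position permutation
            seq.value += np.roll(hvs[i], i)
        return seq

    def __len__(self):
        return self.size

hvs = np.random.default_rng(0).choice([-1.0, 1.0], size=(10, 10000))
S = Sequence.from_tensor(hvs)
assert len(S) == 10
```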
Fixed both issues, will make a PR now.
>>> hv = torchhd.random_hv(10, 10000)
>>> S = torchhd.structures.Sequence.from_tensor(hv)
>>> len(S)
10
>>> S.value
tensor([4., 0., 4., ..., 0., 0., 0.])
>>> torchhd.functional.cosine_similarity(S[2], hv)
tensor([ 0.0077, -0.0160, 0.3011, -0.0088, -0.0022, 0.0140, 0.0063, 0.0093,
0.0094, -0.0055])
>>> S.replace(2, hv[2], hv[5])
>>> torchhd.functional.cosine_similarity(S[2], hv)
tensor([ 0.0011, -0.0180, -0.0102, -0.0068, 0.0024, 0.3229, 0.0038, 0.0244,
0.0104, -0.0119])
For the general design discussion on passing the previous value in data structures see #62
`pop`, `popleft`, `replace`: avoid passing the tensor to `pop` or `replace`