Open houxingxing opened 7 years ago
Could you solve your problem, @houxingxing ? I've been having it too. I'm trying to replicate their paper using the MNIST dataset, and I find that a random acquisition is as good as using any of the Bayesian acquisition functions (BALD, max entropy, variation ratios). I can't find why this could be.
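For anyone comparing these acquisition functions, here is a hedged sketch of how they are typically computed from MC-dropout predictions. The array shapes and function names are my own assumptions for illustration, not taken from this repository: `probs` is assumed to have shape `(T, N, C)` for T stochastic forward passes, N pool points, C classes.

```python
import numpy as np

def max_entropy(probs, eps=1e-12):
    """Entropy of the mean predictive distribution (bits)."""
    mean = probs.mean(axis=0)                              # (N, C)
    return -np.sum(mean * np.log2(mean + eps), axis=1)

def bald(probs, eps=1e-12):
    """Mutual information between predictions and model parameters:
    entropy of the mean minus mean of the per-pass entropies."""
    mean = probs.mean(axis=0)
    entropy_of_mean = -np.sum(mean * np.log2(mean + eps), axis=1)
    mean_of_entropy = -np.mean(
        np.sum(probs * np.log2(probs + eps), axis=2), axis=0)
    return entropy_of_mean - mean_of_entropy

def variation_ratios(probs):
    """1 - frequency of the modal predicted class across MC passes."""
    votes = probs.argmax(axis=2)                           # (T, N)
    T, N = votes.shape
    mode_count = np.array(
        [np.bincount(votes[:, i]).max() for i in range(N)])
    return 1.0 - mode_count / T
```

Note that max entropy and BALD disagree on points where individual passes are confident but contradict each other: BALD scores them high, plain predictive entropy cannot distinguish them from genuinely ambiguous points.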
Can you give me some advice? I want to apply traditional active learning methods, such as maximal entropy, to a CNN model, but I am failing.
[network]

Active sampling function:

```python
def getData(proba, data, label, batch_data, batch_label, num, flag):
    tmpdata = np.empty((num, 3, 32, 32), dtype='float32')
    tmplabel = np.empty((num, 10), dtype='uint8')
    if num == batch_size:
        # Maximal-entropy acquisition: score each pool point by the
        # entropy of its predicted class distribution, then sort.
        Class_Log_Probability = np.log2(proba)
        Entropy_Each_Cell = -np.multiply(proba, Class_Log_Probability)
        Entropy = np.sum(Entropy_Each_Cell, axis=1)
        index = select_sort(Entropy, num, flag)
    else:
        index = get_index(flag)
    print(index)
```
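One thing worth checking in the snippet above: `np.log2(proba)` returns `-inf` wherever `proba` contains an exact zero, and `0 * -inf` gives `NaN`. NaN entropy scores make any subsequent sort effectively arbitrary, which would look exactly like random acquisition. A minimal, hedged fix (the clipping constant and function name are my own, not from the original code):

```python
import numpy as np

def entropy_scores(proba, eps=1e-12):
    """Per-sample predictive entropy in bits, safe against p == 0."""
    # Clip to avoid log2(0) -> -inf and 0 * -inf -> NaN,
    # either of which corrupts the ranking used for acquisition.
    p = np.clip(proba, eps, 1.0)
    return -np.sum(p * np.log2(p), axis=1)
```

With saturated softmax outputs (common for a well-trained CNN), this difference alone can decide whether the entropy ranking is meaningful.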
The result: the random method performs better. Why?