Closed: NickShahML closed this issue 9 years ago
Friend, I could take your money and that would be super easy. But here is one thing for free: DBNs are somewhat outdated (they're 2006 stuff). Check the dates of the articles saying Google, Facebook, and MS use DBNs. You can do much better with more modern architectures.
PS to Keras devs: Sorry for blocking the easy money guys, but I had to say the truth.
@EderSantana Thank you for your feedback. You gave me a good laugh.
I apologize as I'm pretty new to deep learning. Basically, my goal is to read all of Wikipedia and make a hierarchy of topics. For example, dogs and cats are under the "animal" category and stars and planets are under the "astronomy" category. It would generate these topics on its own. The only input data you give is thousands of articles from Wikipedia.
I thought DBNs would be the best strategy to tackle this task due to their ability to find deep hierarchical structures.
I'm not quite sure if this is the best place to ask this type of question. Is there perhaps a better forum for this?
Regardless, Keras is amazing. I think DBNs went out of style in 2006, but recently, I think they have resurfaced. I'm reading many papers from 2014 and 2015 saying that they are being used for voice recognition and more (http://www.aclweb.org/anthology/U14-1017). However, I could be misunderstanding this.
Google, Facebook, and Microsoft all use them
I assure you they do not. DBNs used to be a pet idea of a few researchers in Canada in the late 2000s. These people now work for a large Silicon Valley company and they haven't published anything about DBNs in a long time.
but recently, I think they have resurfaced. I'm reading many papers from 2014 and 2015 saying that they are being used for voice recognition
Some researchers or PhD students are bound to keep experimenting with them occasionally. But if you want an overview of what a state-of-the-art voice recognition system uses, look at http://arxiv.org/abs/1507.06947 (doable with Keras).
@fchollet, thanks for pointing me towards this article. I'm more interested in building hierarchies and trees, but I will do my research first. Appreciate your help.
@EderSantana Thanks for your info. I do have a question regarding the state of the art. For example, I am dealing with a problem where there is a large database of images without tags, i.e., I couldn't use supervised learning. The images have structures in them, judging from visual inspection, but it's hard to clearly define how each structure belongs to a certain class. So in this case, I want to use unsupervised techniques, and hopefully at the end of 'pre-training' these networks will give me some idea of what the common structures look like. Do you know what advances we have made in this direction? I hope I explained my situation clearly enough.
@metatl try to extract features with a pretrained net and cluster the results.
Here is how to extract features using Deep Neural Networks with Python/Theano: http://sklearn-theano.github.io/auto_examples/plot_asirra_dataset.html#example-plot-asirra-dataset-py
You could also use sklearn for clustering.
@EderSantana This looks to be supervised learning, though…
the example is supervised, but you can change the classifier on top to a clustering algorithm.
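A minimal sketch of that idea: extract a compact feature vector per image, then cluster the features instead of training a supervised head. To keep the snippet self-contained, the pretrained-net feature extractor is stubbed out with PCA and the unlabeled images are synthetic; in practice you would substitute activations from a pretrained network.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for unlabeled 64x64 grayscale images: two synthetic "structure" groups.
bright = rng.normal(0.8, 0.05, size=(50, 64 * 64))
dark = rng.normal(0.2, 0.05, size=(50, 64 * 64))
images = np.vstack([bright, dark])

# Step 1: reduce each image to a compact feature vector.
# (In practice this would be activations from a pretrained net;
# PCA is used here only to keep the sketch runnable on its own.)
features = PCA(n_components=16, random_state=0).fit_transform(images)

# Step 2: cluster the features -- no labels needed at any point.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```

The supervised "classifier on top" is simply swapped for `KMeans`; any sklearn clustering estimator (DBSCAN, agglomerative, etc.) would slot in the same way.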
@EderSantana I've never used an sklearn pipeline before, but guessing from this code I see that it has classes that require both an input and a target. In the case of unsupervised learning there's no target at all.
In my research, I have a small set of images (on the order of 7000) of size 64x64. They are black and white, and there are no labels. I am hoping to use some unsupervised learning algorithm to extract good feature representations of each image. In the end, once I have those compact feature representations, I want to run a clustering algorithm and group the images in a sensible way.
@metatl I'm also new to deep learning, but I'd like to give you some suggestions on image clustering and retrieval: G. Hinton used two-stage semantic hashing to generate binary codes for images:
@EderSantana Hi, I'm new to deep learning as well, and I also want to do unsupervised clustering of images. How about using a convolutional autoencoder to encode the images and then using another clustering method, like k-means, to cluster the corresponding features?
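That combination can be sketched in a few lines of Keras. The architecture, layer sizes, and the random stand-in data below are purely illustrative, not tuned for any real dataset:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.cluster import KMeans

# Encoder: 64x64x1 image -> compact code vector.
inputs = keras.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)  # 32x32
x = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)        # 16x16
encoded = layers.Flatten(name="code")(x)

# Decoder: code -> reconstructed 64x64x1 image.
x = layers.Reshape((16, 16, 8))(encoded)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
decoded = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Train on (stand-in) unlabeled images, then cluster the codes with k-means.
images = np.random.rand(32, 64, 64, 1).astype("float32")
autoencoder.fit(images, images, epochs=1, batch_size=8, verbose=0)
codes = encoder.predict(images, verbose=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codes)
```

The reconstruction loss forces the code to retain the image's structure, so clustering the codes tends to group visually similar images even with no labels.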
@LeavesBreathe, how did you proceed with your idea of generating a topic hierarchy? I'm working on a similar idea at the moment.
@YMAsano I ended up using a variety of conv and RNN nets. There are many papers that address this topic, though it's not my complete focus right now, so I can't really help you further.
Hi, I'm searching for an implementation of DBMs on TensorFlow and found this topic. I'm working on a project for medical image denoising; the inputs are images with high Poisson noise (so solutions based on a deep Gaussian process, like dropout, may not work), and part of each image is missing (due to limitations in the geometry of the sensors). To solve this problem, I want to use a DBM, a deep belief net, or something similar, so that I can use a stochastic model. There are some papers about DBNs and Bayesian nets; to summarize, I want to ask the following questions:
Looking forward to your reply, thanks.
@Hong-Xiang I suggest you take a look at variational autoencoders; they might be of interest to you. There are even some Keras examples:
https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder_deconv.py
If people had continued to think that neural networks are not worth it and that kernel machines are the answer to everything, the current deep learning hype would probably never have happened and Keras would not exist. I find it very sad to see similar arguments here again; people don't seem to learn from history. If it is as simple to implement as @EderSantana said, then there is no real argument against it. I always thought that the point of Keras is its usability and user-friendliness, but seeing this argumentation makes me doubt that.
@NickShahML thank you. Folks, I have to say, I agree with NickShahML. We need DBNs for classification. Why did the DL4J guys eliminate it? Why did the nolearn guys eliminate it? Why hasn't scikit-learn implemented it? People say DBNs are good for general classification problems. If they won't give it to us, what should we use for this problem? I have an ECG dataset in hand (like a bigger version of IRIS) that resembles this one (just an example): https://www.dropbox.com/s/v3t9k3wb6vmyiec/ECG_Tursun_Full_excel.xls?dl=0

Here I want to implement at least three deep learning methods (1-DBN, 2-CNN, 3-RNN) to classify my data, and I believe the DBN would outperform the other two. (I am frustrated to see that deep learning is extensively used for image recognition, speech recognition, and other sequential problems, while classification of biological/bioinformatic data remains ignored. Why does nobody care about it? There is a bias.) I believe a DBN-style classifier has great potential in both cardiovascular disease detection (what algorithm does IBM Watson use?) and biometric identification, don't you think so?
@tursunwali @NickShahML @EderSantana Interesting discussion... I recently started working in deep learning. I have read most of the papers by Hinton et al., and I don't think the RBM or DBN is outdated. It depends on what the end goal is. In an unsupervised setting, greedy layer-wise RBM/DBN pretraining is essentially a fancy name for the EM (expectation-maximization) algorithm, "neuralized" using function approximation. @EderSantana suggested replacing this with clustering techniques. Well, I don't know which one is better: clustering or the EM algorithm. Both are unsupervised schemes, and either may perform well, depending on the context.
@NickShahML so did you finally find the DBM/RBM to be useful? How does it compare with clustering techniques?
Could anyone point me to a simple explanation of the difference between a DBN and an MLP with AEs? Is the difference all about the stochastic nature of the RBM?
And why would anyone say stacked AEs are outdated? I still see much value in them. @EderSantana
@thebeancounter Most of these networks are quite similar to each other. I would say that the names given to these networks change over time; they all seem the same to me. You could always make stochastic counterparts of deterministic ones.
@rahulsingh1288 Could you please point me to an example of this in Keras?
Regards
I might be wrong, but DBNs are gaining quite a lot of traction in pixel-level anomaly detection, as they don't make the assumptions of traditional background-distribution-based techniques.
Source: www.mdpi.com/1424-8220/18/3/693/pdf
Fchollet and contributors -- Thank you so much for what you have put together. Keras has significantly helped me.
Recently, Restricted Boltzmann Machines and Deep Belief Networks have been of deep interest to me. I see, however, that Keras does not support these.
I know there are resources out there (http://deeplearning.net/tutorial/DBN.html) for DBNs in Theano. However, it would be an absolute dream if Keras could do these.
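For a feel of what such a feature would involve, the building block of a DBN, an RBM trained with one-step contrastive divergence (CD-1), is small enough to sketch in plain NumPy. The layer sizes, learning rate, and toy data below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible bias
        self.b_h = np.zeros(n_hidden)   # hidden bias
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs step (reconstruct, then re-infer hidden).
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        # CD-1 gradient approximation.
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error

# Toy binary data: two repeating patterns the RBM should capture.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_step(data) for _ in range(200)]
```

A DBN then stacks such RBMs, training each layer greedily on the hidden activations of the one below; a full Keras-quality implementation would of course need much more (persistent chains, momentum, mini-batching).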
I know this is all open-source, but I would even be willing to pay someone to help develop DBNs on Keras so we can all use them. Google, Facebook, and Microsoft all use them, and if we could use them too, I think our deep learning abilities would be expanded.
Thoughts on this?