was important, and at the end the reality of that entelechy, the people, was overwhelming. Thank you, Kim Bartley and Donnacha Ó Briain.
Example 2: This movie should have NEVER been made. From the poorly done animation to the beyond-bad acting. I am not sure at what point the people behind this movie said, "OK, looks good! Let's do it!" I was in awe of how truly horrid this movie was. At one point, which may very well have been the WORST point, a computer-generated sabertooth of gold falls from the roof, stabbing the idiot creator of the cats in the mouth... uh, ooookkkk. The villain of the movie was a paralyzed sabertooth that was killed within minutes of its first appearance. The other two manage to kill a handful of people prior to being burned and gunned down. Then there is a random one awaiting victims in the jungle... which scares me for one sole reason: will there be a Part Two? God, for the sake of humans everywhere, I hope not.
This movie was pure garbage, from the PowerPoint-esque credits to the slide-show ending.

Sentiment analysis on IMDB reviews
• 50,000 training reviews; 50,000 test reviews

Results for IMDB Sentiment Classification (long paragraphs):

Method                                        Error rate
Bag of words                                  12.2%
Bag of words + idf                            11.8%
LDA                                           32.6%
LSA                                           16.1%
Average word vectors                          18%
Bag of words + word vectors                   11.7%
Bag of words + word vectors + more tweaks     11.1%
Bag of words + bigrams + Naive Bayes SVM      9%
Paragraph vectors                             7.5%

(A hedged sketch of the paragraph-vector pipeline appears below, after the Conclusions.)

Important side note: "paragraph vectors" can be computed for things that are not paragraphs. In particular:
• sentences
• whole documents
• users
• products
• movies
• audio waveforms
• …

Paragraph Vectors: Train on Wikipedia articles
Nearest-neighbor articles to the article for "Machine Learning".
[Figure: Wikipedia article paragraph vectors visualized via t-SNE]

Example of LSTM-based representation: Machine Translation
Input: "Cogito ergo sum"
Output: "I think, therefore I am!"

LSTM for End-to-End Translation
[Figure: an encoder LSTM reads the source sentence A B C and compresses it into a single big vector, the sentence representation; a decoder LSTM then generates the target sentence W X Y Z from that vector.]
(A toy sketch of this encoder-decoder pattern also follows the Conclusions.)
See: Sequence to Sequence Learning with Neural Networks, Ilya Sutskever, Oriol Vinyals, and Quoc Le. http://arxiv.org/abs/1409.3215. To appear in NIPS, 2014.

Example Translation
• Google Translate: As Reuters noted for the first time in July, the seating configuration is exactly what fuels the battle between the latest devices.
• Neural LSTM model: As Reuters reported for the first time in July, the configuration of seats is exactly what drives the battle between the latest aircraft.
• Human translation: As Reuters first reported in July, seat layout is exactly what drives the battle between the latest jets.

[Figure: 2-D PCA of the LSTM sentence representations. The sentences are linearly separable with respect to subject vs. object, and the representations are mostly invariant to paraphrasing.]

Combining modalities, e.g. vision and language

Generating Image Captions from Pixels (work in progress by Oriol Vinyals et al.; also sketched after the Conclusions)

Human: A young girl asleep on the sofa cuddling a stuffed bear.
Model sample 1: A close up of a child holding a stuffed animal.
Model sample 2: A baby is asleep next to a teddy bear.

Human: Three different types of pizza on top of a stove.
Model sample 1: Two pizzas sitting on top of a stove top oven.
Model sample 2: A pizza sitting on top of a pan on top of a stove.

Human: A green monster kite soaring in a sunny sky.
Model: A man flying through the air while riding a skateboard.

Human: A tennis player getting ready to serve the ball.
Model: A man holding a tennis racquet on a tennis court.

Conclusions
• Deep neural networks are very effective for a wide range of tasks
• By using parallelism, we can quickly train very large and effective deep neural models on very large datasets
• They automatically build high-level representations to solve the desired tasks
• By using embeddings, they can work with sparse data
• Effective in many domains: speech, vision, language modeling, user prediction, language understanding, translation, advertising, …
An important tool in building intelligent systems.

Joint work with many collaborators!
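As a concrete illustration of the paragraph-vector rows in the table above, here is a minimal, hedged sketch using gensim's open-source Doc2Vec (an implementation of Le & Mikolov, 2014). This is not the exact configuration behind the 7.5% result; the two toy reviews and labels are hypothetical stand-ins for the 50,000 labeled IMDB training reviews.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

train_texts = ["an engrossing and moving documentary",   # stand-in data
               "this movie was pure garbage"]
train_labels = [1, 0]                                    # 1 = positive, 0 = negative

# Each review becomes one tagged "paragraph".
corpus = [TaggedDocument(words=t.split(), tags=[i])
          for i, t in enumerate(train_texts)]

# PV-DBOW variant (dm=0); min_count=1 only because the toy corpus is tiny.
model = Doc2Vec(vector_size=400, window=10, min_count=1, epochs=20, dm=0)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# The learned paragraph vectors feed an ordinary linear classifier.
X = [model.dv[i] for i in range(len(train_texts))]
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# An unseen review gets a vector via a short inference pass, then a prediction.
vec = model.infer_vector("truly horrid acting and animation".split())
print(clf.predict([vec]))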
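The encoder-decoder pattern from the translation slides can be sketched in a few lines. Below is a toy PyTorch version, not the model of Sutskever et al. (which used deep multi-layer LSTMs and far larger vocabularies); all sizes and the random token ids are illustrative placeholders.

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode: keep only the final (hidden, cell) state, the "big vector".
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode: generate the target sequence conditioned on that state
        # (teacher forcing: gold target tokens are fed in during training).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # per-position logits over the target vocab

model = Seq2Seq(src_vocab=32000, tgt_vocab=32000)
src = torch.randint(0, 32000, (1, 3))  # e.g. ids for "Cogito ergo sum"
tgt = torch.randint(0, 32000, (1, 7))  # target ids, starting with a BOS token
logits = model(src, tgt[:, :-1])       # input shifted right; predict next token
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 32000), tgt[:, 1:].reshape(-1))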
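The captioning results come from a model in the same spirit: replace the encoder LSTM with a CNN over the pixels, whose output vector conditions the caption decoder. A hedged sketch follows; resnet18 is a modern off-the-shelf stand-in for the vision model actually used, and every name and size here is illustrative.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class Captioner(nn.Module):
    def __init__(self, vocab, dim=256):
        super().__init__()
        cnn = resnet18(weights=None)                 # stand-in vision backbone
        cnn.fc = nn.Linear(cnn.fc.in_features, dim)  # pixels -> one image vector
        self.cnn = cnn
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, images, caption_ids):
        v = self.cnn(images)        # (batch, dim) image representation
        h0 = v.unsqueeze(0)         # use it as the LSTM's initial hidden state
        c0 = torch.zeros_like(h0)
        dec, _ = self.lstm(self.emb(caption_ids), (h0, c0))
        return self.out(dec)        # logits over caption words

cap = Captioner(vocab=10000)
logits = cap(torch.randn(1, 3, 224, 224), torch.randint(0, 10000, (1, 12)))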
Further reading:
• Le, Ranzato, Monga, Devin, Chen, Corrado, Dean, and Ng. Building High-Level Features Using Large Scale Unsupervised Learning. ICML 2012.
• Dean, Corrado, et al. Large Scale Distributed Deep Networks. NIPS 2012.
• Mikolov, Chen, Corrado, and Dean. Efficient Estimation of Word Representations in Vector Space. http://arxiv.org/abs/1301.3781
• Le and Mikolov. Distributed Representations of Sentences and Documents. ICML 2014. http://arxiv.org/abs/1405.4053
• Vanhoucke, Devin, and Heigold. Deep Neural Networks for Acoustic Modeling. ICASSP 2013.
• Sutskever, Vinyals, and Le. Sequence to Sequence Learning with Neural Networks. To appear in NIPS, 2014. http://arxiv.org/abs/1409.3215
• http://research.google.com/papers
• http://research.google.com/people/jeff