Large Scale Deep Learning


Large Scale Deep Learning
Jeff Dean
Joint work with many colleagues at Google

How Can We Build More Intelligent Computer Systems?
• Need to perceive and understand the world
• Basic speech and vision capabilities
• Language understanding
• User behavior prediction
• ...

How can we do this?
• Cannot write separate algorithms for each task we want to accomplish
• Need to write general algorithms that learn from observations

Can we build systems that:
• Generate understanding from raw data
• Solve difficult problems to improve Google's products
• Minimize software engineering effort
• Advance the state of the art in what is possible

Plenty of Data
• Text: trillions of words of English + other languages
• Visual: billions of images and videos
• Audio: thousands of hours of speech per day
• User activity: queries, result page clicks, map requests, etc.
• Knowledge graph: billions of labelled relation triples
• ...

Image Models
[Slides show example images: "What are these numbers?" "What are all these words?" "How about these words?"]

Textual understanding
• "This movie should have NEVER been made. From the poorly done animation, to the beyond bad acting. I am not sure at what point the people behind this movie said "Ok, looks good! Lets do it!" I was in awe of how truly horrid this movie was."

General Machine Learning Approaches
• Learning by labeled example: supervised learning
  • e.g. an email spam detector
  • amazingly effective if you have lots of examples
• Discovering patterns: unsupervised learning
  • e.g. data clustering
  • difficult in practice, but useful if you lack labeled examples
• Feedback right/wrong: reinforcement learning
  • e.g. learning to play chess by winning or losing
  • works well in some domains, becoming more important

Machine Learning
• For many of these problems, we have lots of data
• Want techniques that minimize software engineering effort
  • simple algorithms; teach the computer how to learn from data
  • don't spend time hand-engineering algorithms or high-level features from the raw data

What is Deep Learning?
[Slide shows excerpts from DiCarlo & Cox, "Untangling invariant object recognition", Trends in Cognitive Sciences 11(8), 2007 — a working hypothesis: the ventral visual stream "untangles" object manifolds (e.g. "cat").]
• The modern reincarnation of Artificial Neural Networks from the 1980s and 90s.
• A collection of simple trainable mathematical units, which collaborate to compute a complicated function.
• Compatible with supervised, unsupervised, and reinforcement learning.
What is Deep Learning?
• Loosely inspired by what (little) we know about the biological brain.
• Higher layers form higher levels of abstraction.

Neural Networks
• Learn a complicated function from data (a mapping from space 1 to space 2)

The Neuron
• Computes y = F(Σᵢ wᵢxᵢ) over inputs x₁, x₂, x₃ with weights w₁, w₂, w₃, where F(x) = max(0, x)
• Different weights compute different functions

Neural Networks
• Simple compositions of neurons, mapping an input to an output
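The neuron and its composition into a network fit in a few lines of code. Below is a minimal illustrative sketch in Python/NumPy (not from the talk; the function names, weight shapes, and values are invented for illustration):

```python
import numpy as np

def neuron(x, w):
    """One trainable unit: y = F(sum_i w_i * x_i), with F(x) = max(0, x)."""
    return max(0.0, np.dot(w, x))

def layer(x, W):
    """A layer applies many neurons (the rows of W) to the same input."""
    return np.maximum(0.0, W @ x)

# A tiny two-layer network: composing simple units yields a complicated function.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # 4 hidden neurons over a 3-dimensional input
W2 = rng.normal(size=(1, 4))   # 1 output neuron over the hidden layer
x = np.array([1.0, -2.0, 0.5])
y = layer(layer(x, W1), W2)
print(y)                       # different weights (W1, W2) compute different functions
```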
Learning Algorithm
• while not done:
  • pick a random training case (x, y)
  • run the neural network on input x
  • modify the connection weights to make the prediction closer to y

How to modify the connection weights?
• Follow the gradient of the error with respect to the weights
• The gradient points in the direction of improvement

What can neural nets compute?
• Human perception is very fast (0.1 second)
  • Recognize objects ("see")
  • Recognize speech ("hear")
  • Recognize emotion
  • Instantly see how to solve some problems
  • And many more!

Why do neural networks work?
• See image → click if cat: in 0.1 sec, neurons fire only about 10 times!
• Anything humans can do in 0.1 sec, the right big 10-layer network can do too

Functions Artificial Neural Nets Can Learn
• Pixels → "ear"
• Audio → "sh ang hai res taur aun ts"
• (doc1, doc2) → P(doc1 preferred over doc2)
• "Hello, how are you?" → "Bonjour, comment allez-vous?"

Research Objective: Minimizing Time to Results
• We want results of experiments quickly
• "Patience threshold": no one wants to wait more than a few days or a week for a result
• Significantly affects the scale of problems that can be tackled
• We sometimes optimize for experiment turnaround time, rather than absolute minimal system resources for performing the experiment
• Goal: train in a day what takes a single GPU card 6 weeks

How Can We Train Big Nets Quickly?
• Exploit many kinds of parallelism
  • Model parallelism
  • Data parallelism

Model Parallelism: Partition model across machines
[Diagram: input data flows through Layer 0 ... Layer N to a representation; each layer is split across Partitions 1-3, and layers sometimes have local receptive fields]
• Minimal network traffic: the most densely connected areas are on the same partition
• One replica of our biggest model: 144 machines, ~2300 cores

Data Parallelism: Asynchronous Distributed Stochastic Gradient Descent
[Diagram: model workers process separate data shards and exchange parameters with a parameter server]
• Each worker fetches the current parameters p, computes a gradient ∆p on its data shard, and sends it to the parameter server
• The parameter server applies p' = p + ∆p, then p'' = p' + ∆p', and so on, asynchronously across workers
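Below is a toy, single-process sketch of the asynchronous parameter-server pattern just described (my illustration, not Google's DistBelief code). A linear least-squares model stands in for a deep network so the gradient fits on one line; the class and function names are invented, and real workers run on separate machines rather than threads:

```python
import threading
import numpy as np

class ParameterServer:
    """Holds the parameters; applies updates asynchronously: p' = p + dp."""
    def __init__(self, dim):
        self.p = np.zeros(dim)
        self.lock = threading.Lock()
    def fetch(self):
        with self.lock:
            return self.p.copy()
    def apply(self, dp):
        with self.lock:
            self.p += dp

def worker(ps, shard_x, shard_y, steps, lr=0.01):
    """One model replica: fetch p, compute a gradient on a random case, push dp."""
    rng = np.random.default_rng()
    for _ in range(steps):
        i = rng.integers(len(shard_x))
        x, y = shard_x[i], shard_y[i]
        p = ps.fetch()
        err = p @ x - y          # linear model; a real net backpropagates through layers
        grad = err * x           # gradient of 0.5 * err^2 with respect to p
        ps.apply(-lr * grad)     # interleaves freely with other workers (asynchronous)

# Synthetic data split into 4 shards; true parameters to recover.
rng = np.random.default_rng(0)
true_p = np.array([2.0, -3.0, 1.0])
X = rng.normal(size=(1000, 3))
Y = X @ true_p
shards = np.array_split(np.arange(1000), 4)

ps = ParameterServer(dim=3)
threads = [threading.Thread(target=worker, args=(ps, X[s], Y[s], 2000)) for s in shards]
for t in threads: t.start()
for t in threads: t.join()
print(ps.p)  # ≈ [2, -3, 1] despite racy, unsynchronized updates
```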
Applications

Acoustic Modeling for Speech Recognition
• Close collaboration with the Google Speech team
• Trained in <5 days on a cluster of 800 machines
• 30% reduction in Word Error Rate for English ("biggest single improvement in 20 years of speech research")
• Launched in 2012 at the time of the Jelly Bean release of Android

2012-era Convolutional Model for Object Recognition
[Diagram: input → convolutional layers (same weights used at all spatial locations in a layer) → fully-connected layers (Layer 1 ... Layer 7) → softmax to predict the object class label]
• Convolutional networks developed by Yann LeCun (NYU)
• Basic architecture developed by Krizhevsky, Sutskever & Hinton (all now at Google)
• Won the 2012 ImageNet challenge with a 16.4% top-5 error rate
• (A toy sketch of a convolutional layer follows below)

2014-era Model for Object Recognition
• Developed by a team of Google researchers
• 24 layers deep, built from modules with 6 separate convolutional layers
• Won the 2014 ImageNet challenge with a 6.66% top-5 error rate

Good fine-grained classification: "hibiscus" vs. "dahlia"
Good generalization: both photos recognized as a "meal"
Sensible errors: "snake" vs. "dog"
Works in practice for real users.
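To make "same weights used at all spatial locations" concrete, here is a minimal NumPy sketch of one convolutional layer followed by a ReLU (my illustration, not the actual model; a naive loop is used for clarity, and the filter values are random stand-ins):

```python
import numpy as np

def conv2d(image, filters):
    """Slide each k×k filter over every spatial location of a 2-D image.

    image:   (H, W) array
    filters: (n_filters, k, k) array — the *same* weights reused everywhere
    returns: (n_filters, H-k+1, W-k+1) feature maps
    """
    n, k, _ = filters.shape
    H, W = image.shape
    out = np.zeros((n, H - k + 1, W - k + 1))
    for f in range(n):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(image[i:i+k, j:j+k] * filters[f])
    return np.maximum(0.0, out)    # ReLU nonlinearity

rng = np.random.default_rng(0)
image = rng.normal(size=(28, 28))
filters = rng.normal(size=(8, 5, 5))   # 8 learned 5×5 feature detectors
features = conv2d(image, filters)
print(features.shape)                  # (8, 24, 24)
```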
What about domains with sparse input data?
• Deep neural networks have proven themselves across a range of supervised learning tasks involving dense input features
• How can DNNs possibly deal with sparse data?
• Answer: embeddings
[Diagram: words such as "dolphin", "porpoise", "SeaWorld", "Paris", "Camera" placed in a ~1000-D joint embedding space]
• An embedding function E maps raw sparse inputs to floating-point vectors, which feed a deep neural network for prediction (classification or regression)

How Can We Learn the Embeddings?
• Skipgram Text Model: a single embedding function E maps a source word to its embedding vector; upper layers and a hierarchical softmax classifier predict nearby words (e.g. source word "Obama's" → nearby words "meeting", "with", "Putin")
• Mikolov, Chen, Corrado and Dean. Efficient Estimation of Word Representations in Vector Space, http://arxiv.org/abs/1301.3781

Nearest neighbors in the language embedding space are closely related semantically
• tiger shark → bull shark, blacktip shark, shark, oceanic whitetip shark, sandbar shark, dusky shark, blue shark, requiem shark, great white shark, lemon shark
• car → cars, muscle car, sports car, compact car, autocar, automobile, pickup truck, racing car, passenger car, dealership
• new york → new york city, brooklyn, long island, syracuse, manhattan, washington, bronx, yonkers, poughkeepsie, new york state
• Trained skip-gram model on a Wikipedia corpus: 5.7M docs, 5.4B terms, 155K unique terms, 500-D embeddings

Solving Analogies
• Embedding vectors trained for the language modeling task have very interesting properties (especially the skip-gram model):
  • E(hotter) - E(hot) ≈ E(bigger) - E(big)
  • E(Rome) - E(Italy) ≈ E(Berlin) - E(Germany)
• So analogies can be solved with vector arithmetic:
  • E(hotter) - E(hot) + E(big) ≈ E(bigger)
  • E(Rome) - E(Italy) + E(Germany) ≈ E(Berlin)
• A skip-gram model with 640 dimensions trained on 6B words of news text achieves 57% accuracy on an analogy-solving test set

Visualizing the Embedding Space
[Visualization: embeddings are powerful — parallel vector offsets for word pairs such as fall→fallen, draw→drawn/drew, take→taken/took, give→given/gave]
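A sketch of analogy solving by vector arithmetic (illustrative only: the embedding values below are fabricated stand-ins, not trained skip-gram vectors, and the helper names are mine):

```python
import numpy as np

# Toy embedding table — in practice these come from a trained skip-gram model.
E = {
    "rome":    np.array([0.9, 0.1, 0.0]),
    "italy":   np.array([0.7, 0.1, 0.5]),
    "berlin":  np.array([0.1, 0.9, 0.0]),
    "germany": np.array([-0.1, 0.9, 0.5]),
}

def nearest(query, exclude):
    """Return the vocabulary word with highest cosine similarity to `query`."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in E if w not in exclude), key=lambda w: cos(E[w], query))

# E(Rome) - E(Italy) + E(Germany) ≈ E(Berlin)
q = E["rome"] - E["italy"] + E["germany"]
print(nearest(q, exclude={"rome", "italy", "germany"}))  # -> "berlin"
```

With a real 500-D vocabulary the `nearest` search runs over hundreds of thousands of words, but the arithmetic is exactly this.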

Embeddings seem useful. What about longer pieces of text?

Can We Embed Longer Pieces of Text?
• Query similarity / query-document scoring (e.g. "Roppongi weather", "Is it raining in Tokyo?", "Record temps in Japan's capital")
• Machine translation
• Question answering
• Natural language understanding?

Embeddings for long chunks of text — three approaches:
• Bag of Words: average of the word embeddings → sentence rep
• Topic Model: paragraph vectors → sentence rep
• Sequential: RNN / LSTM → sentence rep

Paragraph Vectors
• Word vectors: word → similar_word. Paragraph vectors: doc → similar_doc.

Simple Language Model
[Diagram: embed the words "the quick brown fox" with a word embedding matrix Ew, concatenate the vectors, and use a hierarchical softmax classifier to predict the next word, "jumped"]

Paragraph Vector Model
[Diagram: same as above, but a training paragraph id is also embedded via a paragraph embedding matrix Ep and concatenated with the word embeddings]
• Details in Distributed Representations of Sentences and Documents, Quoc Le and Tomas Mikolov, ICML 2014, http://arxiv.org/abs/1405.4053
• Ep is a matrix of dimension (# training paragraphs) × d
• At inference time, for a new paragraph, hold the rest of the model fixed and run gradient descent on the words in the paragraph to obtain a representation for the paragraph
• The paragraph vector captures the complementary, non-local information that is best able to predict the next word
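A sketch of the inference step just described, using the simpler PV-DBOW variant from the Le & Mikolov paper (predict the paragraph's words directly from its vector). The softmax weights here are random stand-ins for a trained model, and all names are invented:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def infer_paragraph_vector(word_ids, U, dim, steps=200, lr=0.1):
    """Infer a vector for an unseen paragraph: hold the trained softmax
    weights U (vocab × dim) fixed and run gradient descent so the
    paragraph vector predicts the paragraph's words."""
    rng = np.random.default_rng(0)
    p = rng.normal(scale=0.01, size=dim)
    for _ in range(steps):
        for w in word_ids:
            probs = softmax(U @ p)      # predict a word from the paragraph vector
            grad = U.T @ probs - U[w]   # gradient of -log probs[w] w.r.t. p
            p -= lr * grad              # update only p; U stays fixed
    return p

# Toy usage: vocabulary of 20 words, 8-D vectors, pretend-trained weights U.
rng = np.random.default_rng(1)
U = rng.normal(size=(20, 8))
vec = infer_paragraph_vector([3, 7, 7, 12], U, dim=8)
print(vec.shape)  # (8,) — usable as a feature vector, e.g. for sentiment
```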

Text Classification: sentiment analysis on IMDB reviews (50,000 training examples; 50,000 test examples)

Example 1: "I had no idea of the facts this film presents. As I remember this situation I accepted the information presented then in the media: a confused happening around a dubious personality: Mr. Chavez. The film is a revelation of many realities, I wonder if something of this caliber has ever been made. I supposed the protagonist was Mr. Chavez but everyone coming up on picture was important and at the end the reality of that entelechy: the people, was overwhelming. Thank you Kim Bartley and Donnacha O'Briain."

Example 2: "This movie should have NEVER been made. From the poorly done animation, to the beyond bad acting. I am not sure at what point the people behind this movie said "Ok, looks good! Lets do it!" I was in awe of how truly horrid this movie was. At one point, which very may well have been the WORST point, a computer generated Saber Tooth of gold falls from the roof stabbing the idiot creator of the cats in the mouth...uh, ooookkkk. The villain of the movie was a paralyzed sabretooth that was killed within minutes of its first appearance. The other two manages to kill a handful of people prior to being burned and gunned down. Then, there is a random one awaiting victims in the jungle...which scares me for one sole reason. Will there be a Part Two? God, for the sake of humans everywhere I hope not. This movie was pure garbage. From the power point esquire credits to the slide show ending."

Results for IMDB Sentiment Classification (long paragraphs)

  Method                                        Error rate
  Bag of words                                  12.2%
  Bag of words + idf                            11.8%
  LDA                                           32.6%
  LSA                                           16.1%
  Average word vectors                          18%
  Bag of words + word vectors                   11.7%
  Bag of words + word vectors + more tweaks     11.1%
  Bag of words + bigrams + Naive Bayes SVM      9%
  Paragraph vectors                             7.5%

Important side note: "paragraph vectors" can be computed for things that are not paragraphs. In particular: sentences, whole documents, users, products, movies, audio waveforms, ...

Paragraph Vectors: Train on Wikipedia articles
• Nearest neighbor articles to the article for "Machine Learning"
• Wikipedia article paragraph vectors visualized via t-SNE

Example of LSTM-based representation: Machine Translation
• Input: "Cogito ergo sum" → (big vector) → Output: "I think, therefore I am!"

LSTM for End to End Translation
[Diagram: source language "A B C" → sentence rep → target language "W X Y Z"]
• See: Sequence to Sequence Learning with Neural Networks, Ilya Sutskever, Oriol Vinyals, and Quoc Le. http://arxiv.org/abs/1409.3215. To appear in NIPS, 2014.

Example Translation
• Google Translate: "As Reuters noted for the first time in July, the seating configuration is exactly what fuels the battle between the latest devices."
• Neural LSTM model: "As Reuters reported for the first time in July, the configuration of seats is exactly what drives the battle between the latest aircraft."
• Human translation: "As Reuters first reported in July, seat layout is exactly what drives the battle between the latest jets."

LSTM for End to End Translation: properties of the sentence representations (PCA projections)
• Linearly separable with respect to subject vs. object
• Mostly invariant to paraphrasing
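A skeletal sketch of the sequence-to-sequence idea above (my paraphrase of the architecture, not the authors' code): an encoder consumes the source sentence and its final hidden state — the "big vector" — conditions a decoder that emits the target sentence. Plain tanh-RNN cells stand in for the deep LSTMs, and all weights are untrained random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
D, V = 16, 50   # hidden size, toy target vocabulary size

# Untrained toy weights — in the real system these are learned on sentence pairs.
We_h, We_x = rng.normal(size=(D, D)) * 0.1, rng.normal(size=(D, D)) * 0.1  # encoder
Wd_h, Wd_x = rng.normal(size=(D, D)) * 0.1, rng.normal(size=(D, D)) * 0.1  # decoder
W_out = rng.normal(size=(V, D)) * 0.1   # hidden state -> target word scores
E_tgt = rng.normal(size=(V, D)) * 0.1   # target word embeddings (fed back in)

def step(h, x, Wh, Wx):
    """One recurrent step (a stand-in for an LSTM cell)."""
    return np.tanh(Wh @ h + Wx @ x)

def translate(source_embeddings, max_len=5):
    # Encoder: compress the whole source sentence into one vector h.
    h = np.zeros(D)
    for x in source_embeddings:
        h = step(h, x, We_h, We_x)
    # Decoder: emit target words one at a time, feeding each back as input.
    x, output = np.zeros(D), []
    for _ in range(max_len):
        h = step(h, x, Wd_h, Wd_x)
        word = int(np.argmax(W_out @ h))   # greedy; the paper uses beam search
        output.append(word)
        x = E_tgt[word]
    return output

source = [rng.normal(size=D) for _ in range(3)]   # embeddings for "A B C"
print(translate(source))                          # target word ids (untrained)
```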
Combining modalities: e.g. vision and language

Generating Image Captions from Pixels (work in progress by Oriol Vinyals et al.)
• Human: "A young girl asleep on the sofa cuddling a stuffed bear."
  Model sample 1: "A close up of a child holding a stuffed animal."
  Model sample 2: "A baby is asleep next to a teddy bear."
• Human: "Three different types of pizza on top of a stove."
  Model sample 1: "Two pizzas sitting on top of a stove top oven."
  Model sample 2: "A pizza sitting on top of a pan on top of a stove."
• Human: "A green monster kite soaring in a sunny sky."
  Model: "A man flying through the air while riding a skateboard."
• Human: "A tennis player getting ready to serve the ball."
  Model: "A man holding a tennis racquet on a tennis court."

Conclusions
• Deep neural networks are very effective for a wide range of tasks
  • Automatically build high-level representations to solve desired tasks
  • By using embeddings, can work with sparse data
  • Effective in many domains: speech, vision, language modeling, user prediction, language understanding, translation, advertising, ...
• By using parallelism, we can quickly train very large and effective deep neural models on very large datasets
• An important tool in building intelligent systems

Joint work with many collaborators!

Further reading:
• Le, Ranzato, Monga, Devin, Chen, Corrado, Dean & Ng. Building High-Level Features Using Large Scale Unsupervised Learning, ICML 2012.
• Dean, Corrado, et al. Large Scale Distributed Deep Networks, NIPS 2012.
• Mikolov, Chen, Corrado & Dean. Efficient Estimation of Word Representations in Vector Space, http://arxiv.org/abs/1301.3781.
• Le & Mikolov. Distributed Representations of Sentences and Documents, ICML 2014, http://arxiv.org/abs/1405.4053.
• Vanhoucke, Devin & Heigold. Deep Neural Networks for Acoustic Modeling, ICASSP 2013.
• Sutskever, Vinyals & Le. Sequence to Sequence Learning with Neural Networks, http://arxiv.org/abs/1409.3215. To appear in NIPS, 2014.
• http://research.google.com/papers
• http://research.google.com/people/jeff