OpenAI announces an unsupervised AI system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews

ScienceRocks
OpenAI announces an unsupervised AI system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews



We’ve developed an unsupervised system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews.
A linear model using this representation achieves state-of-the-art sentiment analysis accuracy on a small but extensively-studied dataset, the Stanford Sentiment Treebank (we get 91.8% accuracy versus the previous best of 90.2%), and can match the performance of previous supervised systems using 30-100x fewer labeled examples. Our representation also contains a distinct “sentiment neuron” which contains almost all of the sentiment signal.
We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment. We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs.
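
For anyone curious what "a linear model using this representation" looks like in practice, here is a minimal sketch, not OpenAI's actual code: the feature matrix and labels below are random placeholders standing in for hidden-state vectors extracted from the pretrained character model and a small labeled sentiment set. An L1-penalized logistic regression (one reasonable choice of linear model) is fit on top, and the single highest-weight unit is reported, which is roughly how a dominant "sentiment neuron" would show up.

# Hypothetical sketch, not OpenAI's code: fit a linear classifier on precomputed
# language-model features and find the single most influential unit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4096))   # placeholder for per-review hidden-state vectors
y = rng.integers(0, 2, size=200)       # placeholder binary sentiment labels

# An L1 penalty pushes most weights toward zero, so a genuinely dominant unit stands out.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)

weights = clf.coef_.ravel()
top_unit = int(np.abs(weights).argmax())
print(f"most influential unit: {top_unit}, weight {weights[top_unit]:+.3f}")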


Three things to point out here:

1: This was an unsupervised model paired with a simple supervised linear classifier on top, i.e., a semi-supervised setup overall. I, and dozens of people many times smarter than myself in computer science and neurology and psychology and whathaveyou, have repeatedly stressed that unsupervised learning is the "truest" form of AI. This is what we're seeing here.
2: This behavior, that is, discovering the concept of sentiment, was not preprogrammed or preplanned. The AI did this entirely on its own; the only thing it was ever trained to do was predict the next character (see the sketch after this list for what that objective looks like).

3: OpenAI specializes in deep reinforcement learning, much like DeepMind. However, their approach is likely quite different, aimed more at true general intelligence, whereas DeepMind's goal, while stated to be "solving intelligence", more than likely involves staying in Google's good graces. In fact, this goes right back to #1: OpenAI is using unsupervised learning models, whereas DeepMind's models rely on supervised and semi-supervised learning. In that regard, OpenAI is arguably even further along toward AGI than DeepMind at this point.
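
As promised in point 2, here is a toy sketch of the training objective itself, written by me rather than taken from OpenAI (they trained a multiplicative LSTM on a very large corpus of Amazon reviews; this is an ordinary LSTM over one throwaway string). The only signal the network ever receives is whether it predicted the next character correctly; no sentiment labels appear anywhere.

# Toy sketch of "trained only to predict the next character" (my own example,
# not OpenAI's setup): an LSTM reads text one character at a time and is
# penalized only for mispredicting the character that follows.
import torch
import torch.nn as nn

text = "this product was great. this product was terrible. "  # stand-in for Amazon reviews
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)  # logits over the next character at every position

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

inputs, targets = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, len(chars)), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()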
 
We're on our way, folks!

 
After that anti-science and anti-education asshole was elected last Nov

Tell me ... what were Obama's GREAT scientific achievements ... other than scrapping the Constellation program and turning NASA into a publicly-funded version of Greenpeace?
 

Is OpenAI Elon Musk's brainchild, or am I getting this confused with something else?
 
