One of the most important indicators of intelligence is undoubtedly the ability to acquire and improve language skills. Until recently, machine language skills, however limited, were the province of chatbots and AI-supported customer service. With GPT-3 we are entering a new dimension. The launch of GPT-3 by OpenAI, a San Francisco startup, is regarded by many authorities as a milestone in the Artificial Intelligence (AI) world. Its range of uses is broad enough to excite anyone who sees the experiments.
The release of OpenAI's GPT-3 has triggered numerous discussions. In this article we would like to examine this novel development from three angles: definitions, experiments, and significance. The first section explains what OpenAI and GPT-3 are and how GPT-3 works. The following section demonstrates experiments conducted by developers using GPT-3, such as writing aphorisms, creating search engines, and generating code and web designs. Last but not least, we discuss the importance of GPT-3 and its potential future contributions.
OpenAI
OpenAI is a non-profit research company that aims to develop and direct the course of artificial intelligence (AI) in a way that benefits humanity. Founded in 2015 by Elon Musk (who left the board of OpenAI in 2018) and Sam Altman, and based in San Francisco, California, the company counts among its backers Microsoft and the charitable foundation of Reid Hoffman, LinkedIn co-founder and Microsoft board member.
OpenAI's first GPT model was introduced in the paper "Improving Language Understanding by Generative Pre-Training" by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
What is GPT-3?
GPT-3 is a language model powered by Artificial Intelligence: a system that estimates how likely a given sentence is to occur in real language. The model reaches its conclusions by evaluating sentences both structurally and semantically, an evaluation made possible by feeding it a large amount of data. By analyzing this data with Deep Learning and various Machine Learning algorithms, the system produces increasingly human-like sentences in direct proportion to the size of the data it is fed and the capacity of the system it runs on. In short, GPT-3 is an AI-supported language model that learns from massive amounts of text and can evaluate sentences structurally and semantically.
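To make the idea of "calculating how likely a sentence is" concrete, here is a minimal toy sketch: a bigram model that estimates each word's probability from the word before it and multiplies the probabilities together (the chain rule). GPT-3 does the same thing in spirit, but with a Transformer over subword tokens and hundreds of billions of training tokens; the tiny corpus and the bigram assumption here are purely illustrative.

```python
from collections import defaultdict

# Toy training corpus; GPT-3 learns from a vastly larger body of text.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat ate the fish",
]

# counts[w1][w2] = how many times w2 followed w1 in the corpus.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1

def prob(w2, w1):
    """P(w2 | w1), estimated from bigram counts."""
    total = sum(counts[w1].values())
    return counts[w1][w2] / total if total else 0.0

def sentence_prob(sentence):
    """Chain rule: P(sentence) = product of P(w_i | w_{i-1})."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= prob(w2, w1)
    return p

# A natural word order scores higher than a scrambled one.
print(sentence_prob("the cat sat on the mat"))
print(sentence_prob("the mat sat on the cat"))
```

The scrambled sentence gets probability zero because the model never saw "mat sat" in training; a real language model smooths over unseen combinations instead of ruling them out.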
Method Used
This section covers the resources used in creating the GPT-3 language model, the kinds of evaluations that were made, and the number of parameters it contains.
The model is built on the Transformer architecture and its attention mechanism, pre-trained on a dataset composed of Common Crawl, Wikipedia, WebText and some additional sources.
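The heart of the Transformer is scaled dot-product attention: each query is compared against every key, the scores are normalized with a softmax, and the values are mixed according to those weights. The following is a minimal pure-Python sketch of that single operation; GPT-3 stacks many such layers with learned projections and multiple heads, none of which are shown here.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V are lists of vectors (lists of floats)."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Weighted mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs: it matches the first
# key more strongly, so the output leans toward the first value.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```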
The model was evaluated against various NLP* benchmarks, achieving state-of-the-art performance on closed-book question answering tasks and setting new records on several language-modelling benchmarks.
The researchers also trained an array of smaller models, ranging from 125 million to 13 billion parameters, to compare their efficiency against GPT-3 in the three settings (zero-shot, one-shot and few-shot).
The accompanying graph displays the gains in accuracy for the zero-, one- and few-shot settings as a function of the number of model parameters; it can be observed that large gains are obtained as the model is scaled up.
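The three settings differ only in how many worked examples are placed inside the prompt; the model's weights are never updated. The sketch below builds such prompts as plain strings (the English-to-French demonstrations echo the ones used in the GPT-3 paper; the exact formatting here is illustrative).

```python
def build_prompt(task, examples, query):
    """Assemble an in-context prompt: k examples followed by the query.
    k=0 is zero-shot, k=1 one-shot, k>1 few-shot. The model simply
    conditions on this text; no gradient updates are involved."""
    lines = [task]
    for source, target in examples:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")
    return "\n".join(lines)

demos = [("sea otter", "loutre de mer"), ("cheese", "fromage")]

zero_shot = build_prompt("Translate English to French:", [], "cheese")
few_shot = build_prompt("Translate English to French:", demos, "plush giraffe")

print(few_shot)
```

Larger models extract much more from those few in-context demonstrations, which is exactly the scaling effect the graph shows.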
Use Cases
In this section, we take a look at the sample projects created using the GPT-3 language model.
Experiment 1
AI developers have already found surprising applications, such as using GPT-3 to write code.
Sharif Shameem, for example, wrote a layout generator in which you describe what you want in plain text and the model generates the corresponding code.
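A generator like this can be wired up with surprisingly little glue code: wrap the user's description in a prompt that pairs a couple of descriptions with their markup, then send it to the GPT-3 completions API. The sketch below only assembles such a request; the prompt format, field values and separator are assumptions for illustration, not Shameem's actual implementation.

```python
# Hypothetical description-to-markup request, in the style of the
# GPT-3 beta completions API. Illustrative only.
description = "a button that says Subscribe and is red"

prompt = (
    "description: a large heading that says Welcome\n"
    "code: <h1>Welcome</h1>\n"
    "###\n"
    f"description: {description}\n"
    "code:"
)

request = {
    "prompt": prompt,
    "max_tokens": 64,     # enough room for a short snippet of markup
    "temperature": 0.2,   # low temperature keeps output predictable
    "stop": ["###"],      # stop before the model invents a new example
}

# In a real app this dict would be POSTed to the API; here we just
# show the assembled payload.
print(request["prompt"])
```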
Experiment 2
A user named Paras Chopra built a fully functional search engine on top of GPT-3. For any query, it returns a full answer and a corresponding URL.
Experiment 3
An AI developer named Jordan Singer wrote a plugin for Figma, a user-interface design application, that lets you design using GPT-3.
Experiment 4
AI developers who had the opportunity to try the GPT-3 API built various projects to test it. You can visit here to see one such project, which generates aphorisms for you automatically.
Why is GPT-3 Important?
People still wonder why GPT-3 is among the most exciting news of recent days; the answer is that it is the biggest model ever trained. Its 175 billion learned parameters make it roughly ten times larger than any previous language model. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, requiring only a few demonstrations given through textual interaction with the model. When a thought is expressed in a few words, GPT-3 can grasp the rest itself and respond accordingly. This huge advance in deep learning and natural language processing paves the way for solving many problems, but it also raises new ones: flooding the internet with fake news, for example. This was a key concern with GPT-2 as well, and this newest iteration makes mass-producing content even easier. Without early experimentation, many issues could sneak by unnoticed. Bias and fake news are problems we can easily predict, but what about the ones we can't?
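To put the "ten times larger" claim in perspective, a quick back-of-the-envelope comparison against widely reported parameter counts (figures are approximations from public announcements, not from this article): Microsoft's Turing-NLG, the previous largest language model, had about 17 billion parameters, and GPT-2's largest variant about 1.5 billion.

```python
# Rough parameter counts in billions (widely reported approximations).
gpt2 = 1.5          # largest GPT-2 model (2019)
turing_nlg = 17.0   # Turing-NLG, previous largest (Microsoft, 2020)
gpt3 = 175.0        # GPT-3 (2020)

print(f"GPT-3 vs Turing-NLG: {gpt3 / turing_nlg:.1f}x")
print(f"GPT-3 vs GPT-2:      {gpt3 / gpt2:.0f}x")
```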
As soon as problems pop up, they can be tackled. And, as OpenAI is only giving people access via an API, anything problematic can be shut down. “They’re acting as a middleman, so if people do start using it maliciously on a mass scale, they would have the ability to detect that and shut it down,” says beta tester Harley Turan, “which is a lot safer than the approach they took with GPT-2.” As well as enforcing its own terms of service, OpenAI says it is working to “develop tools to label and intervene on manifestations of harmful bias,” plus conducting its own research and working with academics to determine potential misuse.
Of course, GPT-3 is still a very young system, and what we have seen is the product of only a few months of work. Yet the ability to produce this many examples in such a short time is like a short trailer for what the technology can offer us in the coming years. Indeed, the direction the technology can take seems, for now, limited only by the imagination.
"Artificial intelligence programs lack awareness and self-awareness. They will never have a sense of humor. They will never be able to appreciate art, beauty or love. They will never feel a feeling of loneliness. They will not be able to empathize with other people, animals, or the environment. They will never be able to enjoy music, fall in love or cry. No matter how good our computers are at winning games like Go or Chess, humans are intellectually superior to computers. We do not live by the rules of these games. Our minds are much, much bigger than that."
By - GPT-3
*NLP: Natural language processing (NLP) is an interdisciplinary domain spanning computer science, artificial intelligence and computational linguistics. It concerns the interaction between computers and human (natural) languages, particularly the efficient processing of enormous natural language corpora.
Author: Hasret Özer
Editor: Lalin Keyvan
Advisor: Burcu Doğru
#GarageAtlas #GPT3 #OpenAI #artificialintelligence #deeplearning #autonomous #futuretechnology @OpenAI