Natural Language Processing with Sequence Models

Start Date: 07/05/2020

Course Type: Common Course

Course Link: https://www.coursera.org/learn/sequence-models-in-nlp


About Course

In Course 3 of the Natural Language Processing Specialization, offered by deeplearning.ai, you will:

Course Syllabus

Module 1



Related Wiki Topic

Article Example
Natural language processing: Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages, and in particular with programming computers to fruitfully process large natural-language corpora. Challenges in natural language processing frequently involve natural language understanding, natural language generation (frequently from formal, machine-readable logical forms), connecting language and machine perception, managing human-computer dialog systems, or some combination thereof.
History of natural language processing: The history of natural language processing describes the advances of the field (see Outline of natural language processing). There is some overlap with the history of machine translation and the history of artificial intelligence.
Outline of natural language processing: The following natural language processing toolkits are popular collections of natural language processing software. They are suites of libraries, frameworks, and applications for symbolic and statistical natural language and speech processing.
Outline of natural language processing: The following technologies make natural language processing possible:
Natural language processing: In the late 1980s and mid-1990s, much natural language processing research relied heavily on machine learning.
Natural language generation: As in other areas of natural language processing, this can be done using either explicit models of language (e.g., grammars) and of the domain, or statistical models derived by analysing human-written texts.
Outline of natural language processing: Natural language processing can be described as all of the following:
Studies in Natural Language Processing: Studies in Natural Language Processing is the book series of the
Outline of natural language processing: The following outline is provided as an overview of and topical guide to natural language processing:
Outline of natural language processing: Natural language processing is computer activity in which computers analyze, understand, alter, or generate natural language. This includes the automation of any or all linguistic forms, activities, or methods of communication, such as conversation, correspondence, reading, written composition, dictation, publishing, translation, lip reading, and so on. Natural language processing is also the name of the branch of computer science, artificial intelligence, and linguistics concerned with enabling computers to engage in communication using natural language(s) in all forms, including but not limited to speech, print, writing, and signing.
Natural language generation: Natural language generation (NLG) is the natural language processing task of generating natural language from a machine representation system such as a knowledge base or a logical form. Psycholinguists prefer the term language production when such formal representations are interpreted as models for mental representations.
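A toy illustration of generating text from such a machine representation: the sketch below fills a template from a knowledge-base-style triple. The record layout, relation name, and template string are invented for illustration; real NLG systems use grammars or statistical models over much richer representations.

```python
# Minimal template-based NLG sketch; all data here is invented.
record = {"entity": "Paris", "relation": "capital_of", "value": "France"}

templates = {
    "capital_of": "{entity} is the capital of {value}.",
}

def generate(rec):
    """Realize a knowledge-base triple as a natural-language sentence."""
    return templates[rec["relation"]].format(**rec)

print(generate(record))  # Paris is the capital of France.
```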
Outline of natural language processing: Natural language processing contributes to, and makes use of (the theories, tools, and methodologies from), the following fields:
Natural language understanding: Natural language understanding (NLU) is a subtopic of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU is considered an AI-hard problem.
Natural language processing: Formerly, many language-processing tasks typically involved the direct hand-coding of rules, which is not in general robust to natural language variation. The machine-learning paradigm calls instead for using statistical inference to automatically learn such rules through the analysis of large "corpora" of typical real-world examples (a "corpus" (plural "corpora") is a set of documents, possibly with human or computer annotations).
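The corpus-driven learning described here can be sketched in miniature: instead of hand-coding a tagging rule, the snippet below estimates the empirical P(tag | word) by counting over a toy annotated corpus (the sentences and tag names are invented) and picks each word's most frequent tag.

```python
from collections import Counter, defaultdict

# Toy annotated corpus of (word, part-of-speech) pairs; invented data.
corpus = [
    ("the", "DET"), ("dog", "NOUN"), ("runs", "VERB"),
    ("the", "DET"), ("cat", "NOUN"), ("runs", "VERB"),
    ("a", "DET"), ("dog", "NOUN"), ("sleeps", "VERB"),
]

# Count how often each word appears with each tag.
counts = defaultdict(Counter)
for word, tag in corpus:
    counts[word][tag] += 1

def most_likely_tag(word):
    """Pick the tag maximizing the empirical P(tag | word)."""
    return counts[word].most_common(1)[0][0]

print(most_likely_tag("dog"))  # NOUN
```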
Empirical Methods in Natural Language Processing: Empirical Methods in Natural Language Processing (EMNLP) is a leading conference in the area of natural language processing. EMNLP is organized by SIGDAT, the ACL special interest group on linguistic data.
Natural language: All language varieties of world languages are natural languages, although some varieties are subject to greater degrees of published prescriptivism and/or language regulation than others. Thus nonstandard dialects can be viewed as a wild type in comparison with standard languages. But even an official language with a regulating academy, such as Standard French with the French Academy, is classified as a natural language (for example, in the field of natural language processing), as its prescriptive points do not make it either constructed enough to be classified as a constructed language or controlled enough to be classified as a controlled natural language.
Triphone: In linguistics, a triphone is a sequence of three phonemes. Triphones are useful in models of natural language processing, where they are used to establish the various contexts in which a phoneme can occur in a particular natural language.
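Extracting triphones from a phoneme sequence is just a sliding window of width three. A minimal sketch (the ARPABET-style symbols and the "sil" silence marker are assumptions, not part of the definition above):

```python
def triphones(phonemes):
    """Return all overlapping three-phoneme windows in the sequence."""
    return [tuple(phonemes[i:i + 3]) for i in range(len(phonemes) - 2)]

# Phoneme sequence for the word "cat", padded with silence markers
# (ARPABET-style symbols; the exact inventory is an assumption here).
seq = ["sil", "K", "AE", "T", "sil"]
print(triphones(seq))
# [('sil', 'K', 'AE'), ('K', 'AE', 'T'), ('AE', 'T', 'sil')]
```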
Natural language user interface: Natural language interfaces are an active area of study in the field of natural language processing and computational linguistics. An intuitive general natural language interface is one of the active goals of the Semantic Web.
Natural language processing: Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. This was due to both the steady increase in computational power (see Moore's law) and the gradual lessening of the dominance of Chomskyan theories of linguistics (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. However, part-of-speech tagging introduced the use of hidden Markov models to NLP, and increasingly, research has focused on statistical models, which make soft, probabilistic decisions based on attaching real-valued weights to the features making up the input data. The cache language models upon which many speech recognition systems now rely are examples of such statistical models. Such models are generally more robust when given unfamiliar input, especially input that contains errors (as is very common for real-world data), and produce more reliable results when integrated into a larger system comprising multiple subtasks.
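The statistical models mentioned here attach real-valued weights (probabilities) to features rather than firing hard if-then rules. A minimal example in that spirit is a maximum-likelihood bigram language model; the training text below is invented, and real speech recognizers train far richer models on large corpora.

```python
from collections import Counter, defaultdict

# Toy training text; invented for illustration.
text = "the cat sat on the mat the cat ate".split()

# Count bigram occurrences: how often w2 follows w1.
bigram = defaultdict(Counter)
for w1, w2 in zip(text, text[1:]):
    bigram[w1][w2] += 1

def prob(w2, w1):
    """Maximum-likelihood estimate of P(w2 | w1)."""
    total = sum(bigram[w1].values())
    return bigram[w1][w2] / total if total else 0.0

print(round(prob("cat", "the"), 3))  # 0.667
```

The soft decision here is the point: an unseen continuation gets probability 0 rather than crashing a rule, and in practice such estimates are smoothed before use.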
Natural language user interface: Siri is an intelligent personal assistant application integrated with the iOS operating system. The application uses natural language processing to answer questions and make recommendations.