Description |
1 online resource (65 pages) |
Summary |
Recent advances in machine learning have lowered the barriers to creating and using ML models, but understanding what these models are doing has only become more difficult. We often discuss such technological advances with little grasp of how they actually work and struggle to develop an intuition for new functionality. In this report, authors Austin Eovito and Marina Danilevsky from IBM focus on how to think about neural network-based language model architectures. They guide you through various models (neural networks, RNN/LSTM, encoder-decoder, attention/transformers) to convey a sense of their abilities without getting entangled in complex details. Using simple examples of how humans approach language in specific applications, the report explores and compares how different neural network-based language models work, empowering you to better understand how machines understand language.
- Dive deep into the basic task of a language model, predicting the next word, and use it as a lens for understanding neural network language models
- Explore the encoder-decoder architecture through abstractive text summarization
- Use machine translation to understand the attention mechanism and transformer architecture
- Examine the current state of machine language understanding to discern what these language models are good at, along with their risks and weaknesses |
Notes |
Copyright © O'Reilly Media, Inc. |
Issuing Body |
Made available through: Safari, an O'Reilly Media Company |
Notes |
Online resource; title from title page (viewed October 25, 2021) |
Subject |
Machine learning |
Form |
Electronic book |
Author |
Danilevsky, Marina, author |
Eovito, Austin, author |
Safari, an O'Reilly Media Company |