Scaling Laws

Modern deep learning systems often improve when we increase three quantities: model size, dataset size, and compute. This empirical regularity is called a scaling law.
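As a concrete illustration, a scaling law is often summarized as a power-law relationship between loss and one of these quantities. The sketch below fits such a curve to synthetic (parameter count, loss) pairs; the data points, the functional form L(N) = a·N^(-alpha) + L_inf, and the fitted constants are illustrative assumptions, not results from this chapter.

```python
# Illustrative sketch: fitting a power-law scaling curve to (model size, loss) pairs.
# The data points and coefficients below are synthetic, not measurements.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, a, alpha, l_inf):
    """Loss as a power law in parameter count: L(N) = a * N^(-alpha) + L_inf."""
    return a * n_params ** (-alpha) + l_inf

# Hypothetical (parameter count, validation loss) observations.
n = np.array([1e7, 1e8, 1e9, 1e10])
loss = np.array([4.2, 3.5, 3.0, 2.7])

(a, alpha, l_inf), _ = curve_fit(scaling_law, n, loss, p0=[10.0, 0.1, 2.0])
print(f"fit: L(N) ~ {a:.2f} * N^(-{alpha:.3f}) + {l_inf:.2f}")

# Extrapolating to a larger model assumes the fitted form keeps holding.
print(f"predicted loss at 1e11 params: {scaling_law(1e11, a, alpha, l_inf):.2f}")
```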
Efficient AI Systems

Modern deep learning systems are constrained by compute, memory, bandwidth, latency, and energy. As models become larger, efficiency becomes a central engineering problem rather than a secondary optimization.
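To make the memory constraint concrete, a back-of-envelope estimate like the sketch below asks how much accelerator memory the weights alone require at different numeric precisions. The 7B parameter count and 24 GB capacity are assumed example values, and activations, optimizer state, and caches are ignored.

```python
# Back-of-envelope sketch: parameter memory at different precisions.
# The model size and accelerator capacity below are assumed example values.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def param_memory_gb(n_params: float, dtype: str) -> float:
    """Memory needed just to store the weights, ignoring activations and optimizer state."""
    return n_params * BYTES_PER_PARAM[dtype] / 1e9

n_params = 7e9          # hypothetical 7B-parameter model
gpu_memory_gb = 24.0    # hypothetical accelerator capacity

for dtype in BYTES_PER_PARAM:
    need = param_memory_gb(n_params, dtype)
    verdict = "fits" if need < gpu_memory_gb else "does not fit"
    print(f"{dtype}: {need:.1f} GB of weights -> {verdict} in {gpu_memory_gb:.0f} GB")
```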
Scientific Deep Learning

Scientific deep learning applies neural networks and differentiable computation to scientific and engineering problems.
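One common pattern is to use automatic differentiation to evaluate the terms of a differential equation directly on a network's output, as in physics-informed training. The minimal sketch below, assuming PyTorch, forms the residual of the toy ODE du/dt = -k·u; the network, the decay constant k, and the loss are illustrative assumptions rather than a method prescribed by this chapter.

```python
# Minimal sketch (illustrative, not from the chapter): using automatic differentiation
# to form a physics residual, the core idea behind physics-informed training.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

t = torch.linspace(0.0, 1.0, 64).reshape(-1, 1).requires_grad_(True)
u = net(t)

# du/dt computed via autograd rather than finite differences.
du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True)[0]

k = 2.0  # hypothetical decay constant for the ODE du/dt = -k * u
residual = du_dt + k * u
physics_loss = (residual ** 2).mean()  # would be minimized alongside any data terms
print(physics_loss.item())
```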
Robotics and Embodied AI

Robotics and embodied AI study learning systems that act in the physical world.
Open Research Problems

Deep learning has made large empirical gains, but many scientific and engineering questions remain open.
Conclusion

Deep learning systems have progressed from small task-specific models to large multimodal foundation systems capable of perception, language understanding, reasoning, planning, generation, and interaction.
Exercises

---
Further Reading

This chapter covered scaling, efficient systems, scientific AI, robotics, and open research problems. The following books, papers, and resources provide deeper treatment of these areas.