Grammars for Games: A Gradient-Based Framework for Optimization in Deep Learning

Deep learning is currently the subject of intensive study. However, fundamental concepts such as representations are not formally defined -- researchers know them when they see them -- and there is no common language for describing and analyzing algorithms. This essay proposes an abstract framework that identifies the essential features of current practice and may provide a foundation for future developments. The backbone of almost all deep learning algorithms is backpropagation, which is simply a gradient computation distributed over a neural network. The main ingredients of the framework are thus, unsurprisingly: (i) game theory, to formalize distributed optimization; and (ii) communication protocols, to track the flow of zeroth- and first-order information. The framework allows natural definitions of semantics (as the meaning encoded in functions), representations (as functions whose semantics is chosen to optimize a criterion) and grammars (as communication protocols equipped with first-order convergence guarantees). Much of the essay is spent discussing examples taken from the literature. The ultimate aim is to develop a graphical language for describing the structure of deep learning algorithms that backgrounds the details of the optimization procedure and foregrounds how the components interact. Inspiration is taken from probabilistic graphical models and factor graphs, which capture the essential structural features of multivariate distributions.
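The abstract's framing of backpropagation as a gradient computation distributed over a network can be made concrete with a small sketch. This is my own illustration, not code from the paper: each layer is treated as a local player that exchanges only zeroth-order messages (activations) on the forward sweep and first-order messages (gradients) on the backward sweep, updating its own parameters from purely local information.

```python
# Minimal sketch (illustrative, not from the paper): backprop as a gradient
# computation distributed over layers that communicate only with neighbours.
import numpy as np

class LinearReLU:
    """One 'player': a linear map followed by ReLU, with purely local updates."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1

    def forward(self, x):
        # zeroth-order message: pass activations to the next layer
        self.x = x
        self.z = x @ self.W
        return np.maximum(self.z, 0.0)

    def backward(self, grad_out, lr=0.1):
        # first-order message: receive dLoss/d(output), emit dLoss/d(input)
        grad_z = grad_out * (self.z > 0)
        grad_W = self.x.T @ grad_z       # local gradient
        grad_in = grad_z @ self.W.T      # message to the previous layer
        self.W -= lr * grad_W            # local parameter update
        return grad_in

rng = np.random.default_rng(0)
layers = [LinearReLU(4, 8, rng), LinearReLU(8, 1, rng)]
x, y = rng.standard_normal((32, 4)), rng.standard_normal((32, 1))

for step in range(100):
    h = x
    for layer in layers:                 # forward sweep: zeroth-order information
        h = layer.forward(h)
    grad = 2.0 * (h - y) / len(y)        # gradient of mean squared error
    for layer in reversed(layers):       # backward sweep: first-order information
        grad = layer.backward(grad)
```

No layer ever sees the global loss or the other layers' weights; the optimization emerges from the protocol of forward and backward messages, which is the structure the essay's game-theoretic framework abstracts.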
Reference Key
ebalduzzi2016frontiersgrammars
Authors David Balduzzi
Journal Frontiers in Robotics and AI
Year 2016
DOI 10.3389/frobt.2015.00039

