Trending arXiv

Note: this version is tailored to @Smerity - though you can run your own! Trending arXiv may eventually be extended to multiple users ...

$A^2T$: Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from Multiple Sources

Janarthanan Rajendran, Aravind Lakshminarayanan, Mitesh M. Khapra, Prasanna P, Balaraman Ravindran

The ability to transfer knowledge from source tasks to a new target task can be very useful in speeding up learning for a Reinforcement Learning agent. Such transfer has been receiving much attention lately, yet applying transfer poses two serious challenges that have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down learning instead of helping it. Second, the agent should be able to do selective transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose $A^2T$ (Attend, Adapt and Transfer), an attentive deep architecture for adaptive transfer, which addresses these challenges. $A^2T$ is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that $A^2T$ is an effective architecture for transfer learning, avoiding negative transfer while transferring selectively from multiple sources.
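The selective-transfer mechanism the abstract describes can be sketched as a state-dependent attention over the outputs of frozen source-task solutions plus a trainable base network. The sketch below is a minimal illustration under assumptions, not the paper's implementation: the function names (`a2t_combine`, `attention_fn`) are hypothetical, and the attention scores are taken as given rather than produced by a learned network.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def a2t_combine(state, source_outputs, base_output, attention_fn):
    """Blend K frozen source-task outputs with a base-network output
    using state-dependent attention weights (one weight per source,
    plus one for the base network).

    source_outputs: list of K arrays, each a source solution's policy
        (or value) output for `state`.
    base_output: the trainable base network's output for `state`.
    attention_fn: maps `state` to K+1 unnormalized attention scores.
    """
    outputs = source_outputs + [base_output]
    weights = softmax(attention_fn(state))  # sums to 1 across sources + base
    return sum(w * o for w, o in zip(weights, outputs))

# Toy usage: two source policies and a base policy over 3 actions.
src1 = np.array([1.0, 0.0, 0.0])   # source 1 prefers action 0
src2 = np.array([0.0, 1.0, 0.0])   # source 2 prefers action 1
base = np.array([0.0, 0.0, 1.0])   # base network prefers action 2

# Hypothetical attention: uniform scores, so each expert gets weight 1/3.
uniform_attention = lambda s: np.zeros(3)
combined = a2t_combine(None, [src1, src2], base, uniform_attention)
```

With uniform scores every expert contributes equally; in the paper's setting the attention scores would instead be produced by a network trained on the target task, so that unhelpful sources receive low weight (avoiding negative transfer) and different sources can dominate in different regions of the state space (selective transfer).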

Captured tweets and retweets: 1