Smartphones can respond appropriately to spoken commands, but not when too many other voices or background sounds are present. This highlights the fact that focusing on important sounds in a noisy environment is a computationally challenging problem, in some ways more daunting than speech recognition itself, even though most of us can do it effortlessly. To study the cortical mechanisms at work, we have developed a rodent model of the "cocktail party problem," which requires the animal to respond to one of two simultaneously presented sounds on some behavioral trials and to the other sound on others. We find that neurons in the prefrontal cortex (PFC) exhibit an anticipatory effect: even before the stimulus is presented, PFC neurons are more or less active depending on which of the two sounds the animal is planning to select. Surprisingly, we also see this anticipatory effect in the primary auditory cortex (A1).

We have also developed a working memory task for rats, and we find that PFC neurons can encode the most recent action taken by the animal, the next action to be taken, and switches from one action to another.

Time permitting, I will present some of our theoretical results demonstrating that a sparse representation of natural scenes can be learned by a biologically plausible network relying only on synaptically local plasticity rules. This model makes several predictions, including lognormal distributions of firing rates and synaptic strengths, as well as higher firing rates, but smaller numbers, for inhibitory neurons relative to excitatory neurons. Finally, we have found that the principle of sparse coding can predict several classes of neurons in the auditory system, including some that have recently been reported and others that have not yet been observed.
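To make the idea of sparse coding with synaptically local plasticity concrete, the following is a minimal sketch of such a network, loosely in the Foldiak/SAILnet style: each weight is updated using only quantities available at that synapse (pre- and postsynaptic activity). The binary units, synthetic input, and all parameter values are illustrative assumptions, not the specific model described in the talk.

```python
import numpy as np

# Minimal sparse-coding network trained with only synaptically local rules.
# Synthetic sparse inputs stand in for natural scenes (an assumption here).
rng = np.random.default_rng(0)
n_in, n_units, p = 16, 8, 0.05          # input dim, coding units, target firing prob

W = 0.1 * rng.standard_normal((n_units, n_in))   # feedforward weights
M = np.zeros((n_units, n_units))                 # learned lateral inhibition
theta = np.ones(n_units)                         # adaptive firing thresholds

def infer(x, steps=30, dt=0.2):
    """Settle binary unit activities under recurrent inhibition."""
    u = np.zeros(n_units)
    y = np.zeros(n_units)
    for _ in range(steps):
        u += dt * (W @ x - M @ y - u)            # leaky integration
        y = (u > theta).astype(float)            # binary "spikes"
    return y

lr = 0.01
for _ in range(2000):
    x = (rng.random(n_in) < 0.2) * rng.random(n_in)   # sparse synthetic input
    y = infer(x)
    # Each update uses only locally available pre/post activity:
    W += lr * y[:, None] * (x[None, :] - W)           # Hebbian with decay
    M += lr * (np.outer(y, y) - p**2)                 # anti-Hebbian decorrelation
    np.fill_diagonal(M, 0.0)
    M = np.clip(M, 0.0, None)                         # inhibition stays non-negative
    theta += lr * (y - p)                             # homeostatic threshold

print("mean firing rate on a probe input:",
      infer((rng.random(n_in) < 0.2) * rng.random(n_in)).mean())
```

The anti-Hebbian lateral rule drives units toward decorrelated, rarely active responses, while the homeostatic threshold pushes each unit toward the target firing probability; together these are what make the learned code sparse without any non-local error signal.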