Doug being random: Choosing books and weird stuff about optimization techniques
About 12 or 13 years ago I recommended a book to my dad that I had quite enjoyed. I knew there was a part toward the middle that you kind of had to slog through, but I thought it was worth it. Dad started it, and about a month later I asked him what he'd been reading lately. "Nothing, really. Just haven't wanted to pick up a book and really get into it lately." That was weird, but I thought I knew what was happening. "What did you think of the book?" He hadn't enjoyed it enough to keep going. Drawing on my own experience, I made a guess: "I think you aren't enjoying the book, but you kind of think you should read it. So whenever you want to read something, you feel you really ought to be reading the book I recommended. But you don't like it, so you end up not reading anything." He was impressed with my insight (which is probably why I remember this story in the first place!) and told me that that was exactly what had been happening. I released him from the book.
Gradually I have come to decide that a book is not something I have any obligation to (probably Dad felt the same way: it wasn't loyalty to the book that was holding him back). There are so many excellent books out there, and there is plenty of chaff, so if I have begun one that seems to be mostly chaff I drop it. Actually, these days I tend to begin a good three different books simultaneously. The ones that get enough of my respect to be finished win.
To a certain extent I am a believer in this strategy in life. You try a lot of things and keep the ones that work. On the other hand . . . did I already tell you the story of the neural nets? Neural nets can be used to search a parameter space for the best spot, and you can give them different behaviors. If you make them completely deterministic they find good spots quickly, but not necessarily the best spots: they settle into local maxima. You can also add some random noise to their behavior. Add a bit and they will jump out of the local maxima and are more likely to find the true maximum. But if you crank up the randomness more and more, they go crazy. They become more and more likely to wander away from the global maximum and search and search forever.
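(If you want to see this concretely, here's a little Python sketch of the idea. The landscape function and all the numbers are made up by me for illustration; the point is just how the noise setting changes where the search ends up.)

```python
import math
import random

# Toy 1-D landscape: a broad hill near x = 0 with smaller bumps on it,
# so there are several local maxima but only one global maximum.
def landscape(x):
    return math.exp(-x * x / 10.0) + 0.3 * math.sin(5.0 * x)

def noisy_search(noise, steps=5000, step_size=0.1):
    """Hill climbing with a tunable amount of randomness.

    noise=0.0 is purely greedy: it climbs the nearest bump and stops.
    A little noise lets the walker hop out of small local maxima.
    A lot of noise is basically a random walk that never settles.
    """
    x = random.uniform(-5.0, 5.0)
    for _ in range(steps):
        candidate = x + random.gauss(0.0, step_size)
        # Always accept uphill moves; accept downhill moves
        # with probability `noise`.
        if landscape(candidate) > landscape(x) or random.random() < noise:
            x = candidate
    return x, landscape(x)

for noise in (0.0, 0.1, 0.9):
    random.seed(2)
    x, value = noisy_search(noise)
    print(f"noise={noise}: settled at x={x:+.2f}, height={value:.3f}")
```

Roughly: the greedy searcher stops on whatever bump it starts near, a little noise tends to find the big hill, and a lot of noise leaves the final position pretty much wherever the walker happened to be when time ran out.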
We have memories and a finite lifetime. So one option is to search for a set amount of time with a fair amount of randomness and then turn down the randomness and settle into something that is pretty darn good. That approach is actually used: physicists call it simulated annealing. You search a space by starting with the system at a high temperature (high temperature means lots of random motion) and then gradually cooling it, letting it settle into a nice spot in the parameter space.
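(Same disclaimer as above: this is my toy illustration on the same made-up landscape, not physicist-grade code. But the shape of the thing, an acceptance rule plus a cooling schedule, is the classic simulated annealing recipe.)

```python
import math
import random

# Same kind of bumpy landscape as before.
def landscape(x):
    return math.exp(-x * x / 10.0) + 0.3 * math.sin(5.0 * x)

def simulated_annealing(steps=5000, t_start=2.0, t_end=0.01, step_size=0.2):
    """Start hot (nearly random), cool off, and freeze into a good spot.

    The Metropolis rule always accepts uphill moves, and accepts a
    downhill move of size `delta` with probability exp(delta / T),
    which shrinks toward zero as the temperature T cools.
    """
    x = random.uniform(-5.0, 5.0)
    cooling = (t_end / t_start) ** (1.0 / steps)  # geometric cooling schedule
    t = t_start
    for _ in range(steps):
        candidate = x + random.gauss(0.0, step_size)
        delta = landscape(candidate) - landscape(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = candidate
        t *= cooling
    return x, landscape(x)

random.seed(2)
x, value = simulated_annealing()
print(f"annealed to x={x:+.2f}, height={value:.3f}")
```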
Perhaps this desire to learn about everything and taste a little of everything is part of an instinct for optimization. I guess I just hope to be neither so sedated that I get trapped in a local maximum nor so hopped up that I can never find a place to make my home. Hopefully the mental temperature will gradually decrease, leaving me in an excellent solution by the time the annealing is done.