I have trouble understanding when people actually listen to podcasts. I don’t mean the funny or musical ones, which you could listen to while doing the dishes, but the deep technical talks and interviews: the ones you actually need to pay attention to in order to understand and learn. I am not good at multitasking while listening intently, and I don’t ride the bus or get stuck in traffic.
Ironically, I started thinking about this while listening to a podcast of an ACM Queue interview with me. Mike Vizard runs an interview series called QueueCasts, which premiered this January with the interview with me and also one with Rob Gingel of Cassatt. This was already the second podcast interview I had done in a short time, the earlier one being with Halley Suitt (now CEO of Top Ten Sources) for IT Conversations/Memory Lane. When listening to my own interviews I continuously want to hit the fast-forward button, but of course that could be because I am rather boring.
I believe my main problem with the format is that you are supposed to consume it linearly. I love to read articles, papers, books, etc., but I am often a non-linear reader; I will scan back and forth for interesting pieces. The fact that you cannot build a hierarchical model of a podcast for selective drill-down is pretty annoying to me. Maybe I am suffering from an adult form of ADD, but the few times I have tried to listen to podcast interviews, I found myself wishing they had been written down instead of put into audio. Maybe services such as Casting Words will help; they use Mechanical Turk to transcribe podcasts (see Jeff Barr’s example transcript), but it would be great to see more structure around it.
It is not that I want everything written down. Jon Udell is starting a new screencast series, The Screening Room, in which he reviews new software, and a written version would have only limited value there. Still, I do find myself occasionally flipping forward to the next screen, so I believe more structure would be very helpful.