A year later...
A while back Google published a specification for making content loaded via XHR indexable by search engines. It works by pairing the content served in your asynchronous requests with synchronous requests that the crawler can follow.
http://code.google.com/web/ajaxcrawling/
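To make the pairing concrete: under that spec an app exposed its states as hash-bang URLs (e.g. example.com/#!page=about), and the crawler rewrote each of them into example.com/?_escaped_fragment_=page=about, expecting the server to answer with a pre-rendered HTML snapshot of the same state. Below is a minimal sketch of the server side of that handshake using only the Python standard library; the PAGES store, the app.js shell, and the page parameter are hypothetical stand-ins for whatever your app actually serves.

```python
# Minimal sketch of a server honoring the (now-deprecated) AJAX crawling
# scheme. PAGES and app.js are made-up placeholders, not part of the spec.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical content store standing in for what your XHR endpoints return.
PAGES = {"about": "<h1>About us</h1>", "news": "<h1>Latest news</h1>"}

class AjaxCrawlingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        fragment = query.get("_escaped_fragment_", [None])[0]
        if fragment is not None:
            # Crawler request: Googlebot rewrote example.com/#!page=about
            # into example.com/?_escaped_fragment_=page=about, so serve a
            # fully rendered HTML snapshot of that application state.
            params = parse_qs(fragment)
            page = params.get("page", ["about"])[0]
            body = PAGES.get(page, "<h1>Not found</h1>")
        else:
            # Normal browser request: serve the JS shell that fetches
            # content over XHR and navigates via #! fragments.
            body = "<html><body><script src='/app.js'></script></body></html>"
        encoded = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(encoded)))
        self.end_headers()
        self.wfile.write(encoded)

if __name__ == "__main__":
    HTTPServer(("", 8000), AjaxCrawlingHandler).serve_forever()
```

With this running, a request to http://localhost:8000/?_escaped_fragment_=page=about returns the same content a browser would build from example.com/#!page=about. Pages without a hash fragment could also opt in by adding `<meta name="fragment" content="!">` to their head, which told the crawler to fetch the `?_escaped_fragment_=` variant.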
No idea whether other search giants support this spec, or whether Google even does. If anybody has any knowledge about the practicality of this method, I'd love to hear about their experience.
Edit: As of today, October 14, 2015, Google has deprecated their AJAX crawling scheme:
In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. ... Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
H/T: @mark-bembnowski