In Google's answer, they say in paragraphs 187 to 192 that they have a license to use AFP's stories because AFP has not required its licensees to exclude robots.
In other words, they want to rely on the mere possibility of excluding Google's robot with robots.txt to fabricate a license where there is none.
This in turn is exactly what I needed to make up my mind. As discussed earlier, I had been considering no longer blocking Google's access to my content with robots.txt. While I want to keep Google from copying my works or making a derivative work (the index) from my content, blocking them with "robots.txt" seems to actually support their point of view.
I have stopped blocking Google's robot in my robots.txt file as of today.
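For readers unfamiliar with the mechanism, the block I removed amounted to a standard robots.txt directive along these lines (a minimal sketch; "Googlebot" is the user-agent name Google's crawler announces):

```
# Exclude only Google's crawler; other robots are unaffected.
User-agent: Googlebot
Disallow: /
```

Any site operator can publish such a file at the root of their server, which is exactly the ease of exclusion Google points to.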
Google wants a "robots.txt exception". They want to point to the fact that anybody can easily shut them out, so that copying without permission becomes the default.
I don't approve of that.
The default is that you have to ask first if you want to make copies, or derivative works like an index.
Let those who actually want their content to be copied and indexed by Google say so in their "robots.txt" files. There is no "robots.txt" exception in copyright law now, and there is no need for one.

Posted by Karl-Friedrich Lenz at August 9, 2005 09:24 AM