Google hosted an online session with presentations from Googlers, plus question-and-answer time handled via Google Moderator. Matt Cutts was there of course, along with John Mueller, Kaspar Szymanski, and other notable Googlers.
I hate duplicate content even more than Google does, so when I can avoid it, I do! I won’t post in great detail because Google is going to make the whole thing available in a few days; that way you can experience it all yourself.
They covered topics such as the new 404 solution and “SEO myths”, talked about personalisation, and answered a whole bunch of interesting questions from attendees.
They quoted again the figure that only 5% of the code online is actually valid. Browsers cope with invalid markup, but valid code will serve you well; it is more efficient on mobile devices, for example.
To stop your site being indexed, you should use a robots meta tag with a noindex directive. Nofollow, on the other hand, is just a request and doesn’t mean that any bot will honour it. They said that you shouldn’t disallow the spiders in robots.txt; let them crawl, and they won’t index the page if you’ve told them not to in the correct way. The same goes for PDFs. Of course it was said that you really shouldn’t put anything on the web that you don’t want found; the web is, after all, public.
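For reference, a minimal sketch of what that looks like in practice (the page is a placeholder of mine, not an example from the session):

```html
<!-- In the <head> of a page you want crawled but kept out of the index -->
<meta name="robots" content="noindex">
```

For non-HTML files like PDFs, which have no <head>, the equivalent is the `X-Robots-Tag: noindex` HTTP response header. And as they said, don’t block the page in robots.txt: if the spider can’t crawl the page, it never sees the noindex instruction.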
As far as links go, as always, cheap and spammy links from those cheap directories will get you nowhere. You are much better off with one link from a very well respected blog or news site than with hundreds of those rubbish links.
They define links as editorial votes about your page; they tell Google more about it. They check on-page and off-page signals, and always go for quality over quantity. My last post was about how Google didn’t do so well in expert document tests, because its notion of quality relies on links. Their definition of quality may be different to the definition put forward by the guys who ran the expert vs Google experiment. Both mean “worthiness and excellence” in my opinion, just not from the same perspective.
I wanted to know how they find paid links, other than people reporting sites for using them via the spam report, but my question didn’t pop up. They’ve really been cracking down on this issue, and it’s in their guidelines as well. Don’t buy links, and if you do, use nofollow so they don’t get spidered and artificially inflate your rankings. There isn’t an automated detection method yet that I know of. It’s not illegal to buy links; it just messes up Google’s method. Once they automate this, I think everyone had better stop buying links.
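Marking a bought link looks like this (the URL and anchor text are placeholders of mine):

```html
<!-- A paid link, marked so it passes no ranking credit -->
<a href="http://example.com/" rel="nofollow">Sponsor name</a>
```

With `rel="nofollow"` in place the link can still send visitors, it just shouldn’t count as an editorial vote in the rankings.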
Duplicate content has never been penalised, but there is a risk that one of those duplicate pages won’t be indexed. They say to put your preferred URL in your sitemap.
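In sitemap terms, that means listing only the version of each URL you want indexed. A minimal sketch following the sitemaps.org protocol (the domain is a placeholder of mine):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List only the preferred version of each URL, e.g. the www
       variant, not both http://example.com/ and http://www.example.com/ -->
  <url>
    <loc>http://www.example.com/widgets/</loc>
  </url>
</urlset>
```

If duplicates exist, leaving the unwanted variants out of the sitemap is how you tell Google which copy you’d rather see indexed.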
What about DMOZ? A few weeks ago they removed the advice about submitting your site to DMOZ and the Yahoo directory from the guidelines. In Google Groups, John Mueller said that they weren’t devaluing these links; they just don’t feel that they need to recommend it. During the Tricks and Treats event they said that DMOZ was really useful: in south-east Asian countries, for example, it isn’t easy to type, so it’s easier to browse.
Also, if you’ve got a killer blog, definitely link it to your site; it will increase its value.
They talked about how Live launched U Rank, which allows you to influence your rankings and share them with friends. Google said they weren’t going to do anything like that, because this method creates too much noise, allowing for a messy evaluation, and it can also be manipulated easily. From everything they’ve published and said, I think Google’s personalisation is more of a private affair, like iGoogle. They’re working on natural language understanding as well, and probably generation too; it would make sense, as the two go together. Read Greg’s post about it for more information.