Taking a Closer Look at the Agent Rank Patent and Why Trusted Agents May Be the Future of SEO

Lots of good stuff is being written and debate is taking place in the SEO community regarding agent/author rank and its possible SEO implications. AJ Kohn wrote a post claiming Author Rank could be bigger than all of the Panda updates combined: Author Rank. And Barry Adams (shockingly) wrote: Why I Think the Author Rank Hype is Misguided. While I agree with AJ, I believe the most recent Agent Rank patent application is about more than just Author Rank; it’s about both Author Rank and Trusted Agents, and that combination could be bigger than all of the Panda updates combined.

So today I’m looking at the latest filing/continuation of the Agent Rank patent application, filed on August 5, 2011 – Agent Rank – for further insights. For anyone interested in following the series or taking a good nap, here ya go:

Agent Rank Filed August 8, 2005

Agent Rank Filed July 21, 2009

Agent Rank Filed May 11, 2011

And of course most recently: Agent Rank Filed August 5, 2011

So this is obviously something Google has been working on for a while now. Since ’05 there haven’t been significant changes to the abstract or the description of the patent, but there have been changes to the claims, and that’s what matters to me because the claims define the process Google is trying to protect and patent.

For a further explanation of rel=author, check out this video by Matt Cutts below:

Quoting Matt Cutts from the video:

“How do we improve the search rankings actually knowing who’s writing what on the web.”

I believe the big difference in this latest continuation of the Agent Rank application is that it has morphed from a system to identify quality authors on the web into a system Google could use to improve search results by gathering and interpreting data gleaned from both trusted agents and trusted authors. The August continuation makes several references to using trusted agents to improve search rankings and to adjust author reputation scores, and that is a key difference.

Ok, so let’s briefly compare the first claim from the May 2011 Agent Rank patent app filing to the first claim from the August 2011 patent app continuation:

First claim May 2011:

“A computer-implemented method comprising: evaluating a document that is hosted on a site, the document including a content item to which a maker of the content item has applied a digital signature; determining whether the digital signature is portable; if the digital signature is portable, using a reputation score associated with the maker in calculating a quality score for the document; and if the digital signature is not portable, using the reputation score associated with the maker in calculating the quality score for the document only if the digital signature is fixed to the site. “

First claim August 2011:

“A computer-implemented method comprising: determining that a content item included in a web resource has been endorsed by one or more trusted agents, wherein the content item has a maker, wherein the maker is an agent having a reputation score; and in response to determining that the content item has been endorsed by one or more trusted agents, adjusting the reputation score associated with the maker of the content item.”

What stood out to me was that the claim says trusted agents can be used not just to adjust the reputation score but to increase it. From the second claim:

“The method of claim 1, wherein adjusting the reputation score comprises increasing the reputation score.”

Things start to get really interesting after that, as we begin to see some signs of what Google may use to adjust an Author Rank reputation score. Claims five through eight reference the quantity, strength, timing and consistency of endorsements being used to adjust the reputation scores of authors (a rough sketch of how that scoring might look follows the quoted claims).

“…determining a quantity of endorsements of the content item by the one or more trusted agents…”

“…determining a quantity of endorsements of the content item comprises determining a quantity of strong endorsements of the content item…”

“…determining a quantity of endorsements, and a timing of the endorsements…and determining, from the quantity and timing of the endorsements, that the content item has been consistently endorsed”
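To make that concrete, here’s a minimal sketch of how an endorsement-based reputation adjustment along the lines of claims five through eight might look. To be clear, the Endorsement structure, the weights and the thirty-day “consistency” test are my own assumptions for illustration; the patent doesn’t spell out any of them.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Endorsement:
    agent_id: str      # the endorsing agent
    strong: bool       # whatever counts as a "strong" endorsement (not defined in the claims)
    timestamp: datetime

def adjust_reputation(reputation, endorsements, trusted_agents):
    """Hypothetical reputation adjustment using the quantity, strength,
    timing and consistency of endorsements from trusted agents (claims 5-8).
    The weights and the consistency test are illustrative assumptions."""
    trusted = [e for e in endorsements if e.agent_id in trusted_agents]
    if not trusted:
        return reputation  # claim 1 only kicks in when trusted agents endorse

    quantity = len(trusted)
    strong_quantity = sum(1 for e in trusted if e.strong)

    # "Consistently endorsed": endorsements spread over time, not one burst.
    timestamps = sorted(e.timestamp for e in trusted)
    consistent = quantity >= 3 and (timestamps[-1] - timestamps[0]) >= timedelta(days=30)

    boost = 0.01 * quantity + 0.03 * strong_quantity
    if consistent:
        boost *= 1.5  # consistently endorsed content counts for more
    return reputation + boost
```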

Apparently there are some aspects of Agent Rank that have not been unleashed yet. For instance, what is a strong endorsement? The following quote from the detailed description tells us a little more…

“In one implementation, the digital signature can include within the scope of the content signed other metadata such as creation date, review score, or recommended keywords for search.”

That last bit about recommending keywords for search really got my mind going (does this remind anyone of the “Combating Web Spam with TrustRank” paper written several years ago – the bit about using trusted sources to create seed sets of other good sources?). I’ll speak on it further below, but let’s continue looking at the claims for now.
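Before moving on, here’s a rough sketch of what a signed content item carrying that kind of metadata could look like. The field names and the stand-in signing scheme (a salted hash rather than a real cryptographic signature) are purely my assumptions; the patent only says the signature can cover metadata such as creation date, review score, or recommended keywords.

```python
import hashlib
import json

def sign_content(content, agent_key, creation_date=None,
                 review_score=None, keywords=None):
    """Hypothetical signed content item: the 'signature' covers the content
    plus optional metadata (creation date, review score, recommended
    keywords), as described in the quoted passage. The hash here is a
    stand-in for whatever real signature scheme Google would use."""
    payload = {
        "content": content,
        "metadata": {
            "creation_date": creation_date,
            "review_score": review_score,
            "recommended_keywords": keywords or [],
        },
    }
    digest = hashlib.sha256(
        (agent_key + json.dumps(payload, sort_keys=True)).encode()
    ).hexdigest()
    return {**payload, "signature": digest}
```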

Claim #9 is my favorite claim on this patent application continuation because it’s about as ambiguous as one can get, yet it alludes to the notion that Google may only be using certain trusted agents to affect their results.

“The method of claim 1, comprising: pre-selecting the trusted agents based on one or more criteria. “

According to later claims in the patent, Google may also be able to utilize Agent Rank to solve a problem they’ve been unable to solve for some time now: proper attribution. Check out claim #15:

“wherein identifying the maker comprises: determining that multiple agents have signed the content item as the maker; determining that two or more of the agents are trusted agents; determining, for each of the trusted agents, a time that the trusted agent has signed the content item; selecting the earliest time; and selecting the trusted agent that has purported to have made the content item at the earliest time, as the maker of the content item.”
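Reading claim #15 literally, the attribution logic is simple enough to sketch out: when multiple agents claim to be the maker, keep only the trusted ones and credit whichever of them signed the content earliest. Here’s a hypothetical version of that; the function name and data shapes are mine, not the patent’s.

```python
from datetime import datetime

def resolve_maker(signatures, trusted_agents):
    """Hypothetical take on claim #15. 'signatures' maps an agent id to the
    time that agent signed the content item as its maker; only trusted
    agents are considered, and the earliest signer wins attribution."""
    trusted = {agent: signed_at for agent, signed_at in signatures.items()
               if agent in trusted_agents}
    if not trusted:
        return None  # no trusted agent claims the content
    return min(trusted, key=trusted.get)

# Example: two trusted agents claim the same article; the earlier signer
# ("alice") would be treated as the maker.
maker = resolve_maker(
    {"alice": datetime(2011, 3, 1), "bob": datetime(2011, 4, 15)},
    trusted_agents={"alice", "bob"},
)
```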

OK, let’s wrap this up shall we?

One of the biggest problems Google has had since the beginning of time (other than proper attribution) is their inability to properly judge links, and I believe Agent Rank is a step in the right direction toward remedying that problem.

Check out the following: (emphasis is mine)

“Assuming that a given agent has a high reputational score, representing an established reputation for authoring valuable content, then additional content authored and signed by that agent will be promoted relative to unsigned content or content from less reputable agents in search results. Similarly, if the signer has a large reputational score due to the agent having an established reputation for providing accurate reviews, the rank of the referenced content can be raised accordingly.”

Did you see that last bit there? Lemme repeat: “if the signer has a large reputational score due to the agent having an established reputation for providing accurate reviews, the rank of the referenced content can be raised accordingly.” Notice that no reference was made to increasing reputation scores or anything like that. It simply stated that if the agent has an established reputation for providing accurate reviews, the rank of the referenced content can be raised accordingly. Interesting, doncha think?
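In other words, the reputation of the agent who signs or reviews a piece of content can flow straight into the ranking of that content. Here’s a minimal sketch of what that promotion might look like, assuming a simple multiplier (the patent doesn’t give a formula):

```python
def rank_with_reputation(base_score, signer_reputation=None):
    """Hypothetical ranking adjustment: content signed or reviewed by a
    reputable agent gets promoted relative to unsigned content. Blending
    reputation in as a simple multiplier is an assumption for illustration."""
    if signer_reputation is None:
        return base_score  # unsigned content keeps its base relevance score
    return base_score * (1.0 + signer_reputation)
```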

And that brings me back to the question I asked earlier: does this remind anyone of the process used/described in the Combating Web Spam with TrustRank paper? In that paper the following process is described (emphasis mine):

“To discover good pages without invoking the oracle function on the entire web, we will rely on an important empirical observation we call the approximate isolation of the good set: good pages seldom point to bad ones. This notion is fairly intuitive—bad pages are built to mislead search engines, not to provide useful information. Therefore, people creating good pages have little reason to point to bad pages.”

Replace the word “pages” with “agents” and in my opinion you have a much better methodology for finding “good” documents. The doubt I’ve always held about Google being able to use Google Plus, Author Rank and Agent Rank as much stronger ranking signals has always been acceptance by the masses.

We’ve all scoffed and laughed at Google’s inability to gain traction with Google Plus, but what if they don’t need total acceptance? What if they just need a seed set of good agents that they can use as a foundation to find other trusted agents and documents, similar to how they’ve used the Yahoo directory and DMOZ in the past, as described in the TrustRank paper? Agents that Google has verified as “trusted agents” whose endorsements they can trust to lead them to other quality documents and improve the rankings of those documents. Again, this is arguably the formula they have used to find good content in the past. And to hammer home that point, a quote from the Agent Rank patent application (a rough sketch of this seed-and-propagate idea follows the quote):

“In an alternative implementation, a seed group of trusted agents can be pre-selected, and the agents within this seed group can endorse other content. Agents whose content receives consistently strong endorsements can gain reputation. In either implementation, the agent’s reputation ultimately depends on the quality of the content which they sign. “
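Putting the TrustRank analogy and that seed-group language together, here’s a rough sketch of what trust propagation over an agent graph could look like. The graph shape, the decay factor and the iteration count are all my own assumptions; the point is simply that trust starts at a pre-selected seed set and flows outward along endorsements.

```python
def propagate_agent_trust(endorsement_graph, seed_agents,
                          iterations=3, decay=0.85):
    """TrustRank-style propagation over an agent graph instead of a page
    graph: seed agents start with full trust, and each round of endorsements
    passes a decayed share of that trust to the endorsed agents."""
    agents = set(endorsement_graph) | {
        target for targets in endorsement_graph.values() for target in targets
    }
    trust = {agent: (1.0 if agent in seed_agents else 0.0) for agent in agents}
    for _ in range(iterations):
        incoming = {agent: 0.0 for agent in agents}
        for agent, endorsed in endorsement_graph.items():
            if endorsed and trust[agent] > 0.0:
                share = decay * trust[agent] / len(endorsed)
                for target in endorsed:
                    incoming[target] += share
        # Seed agents keep their full trust; everyone else keeps only
        # whatever trust has been passed along to them.
        trust = {agent: (1.0 if agent in seed_agents else incoming[agent])
                 for agent in agents}
    return trust
```

Feed it a handful of verified agents as the seed set and you get exactly the “lead me to other quality documents” behavior described above, without needing the entire web to sign up for Google Plus.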

Conclusion

Personally I don’t believe this Agent Rank patent application is about Author Rank so much as it is about Trusted Agents and incorporating the social graph and the endorsements of trusted agents into search results. In my opinion that’s what makes this patent application patentable and unique. I believe one day endorsements from trusted agents will be a stronger ranking signal than links without such attributions. I think any SEO could argue that for Google’s results to improve they will have to reduce their dependence on the link graph, as judging links has proved futile. Many have argued that the social graph is the way to go, but how do you know who to trust?

David Harry, aka the Gypsy of the SEO Training Dojo, has been talking about and harping on [waves to Dave] entities for some time now, and I believe entities and trusted agents will eventually replace unsigned and/or non-endorsed web pages and links as the dominant ranking signal in Google’s algorithms. In my opinion it only makes sense: the signal from inbound links has become so noisy, and the fact that Google would be able to verify that endorsements are coming from real people they trust makes it a no-brainer in my book, if the motivation is there. And with Google getting better and better at identifying entities online, how much longer before they can associate a person/entity across multiple social accounts without a digital signature? Justin Briggs, in Building the Implicit Social Graph, makes a good argument that Google is already building an implicit social graph and has been doing so for some time.

Do you see what I’m seeing here? If Google is already building an implicit social graph and they can find trusted agents through Google Plus… how much longer before the social graph becomes a bigger player in search engine results?

I believe Author Rank will one day be used to influence rankings and that content signed by reputable authors will outrank unsigned content. I also believe trusted agents and their “endorsements” will be used to further influence search rankings. My only question is whether or not Google has the motivation and resources to move away from the link graph and stop attempting to fix a broken system.

