What’s the Big Deal With Social Search?

Social search is garnering a lot of attention these days, but despite all the hoopla it’s not likely to displace traditional algorithmic search any time soon.

What is social search? There isn’t even a good definition, because just about everyone who’s doing some form of social search is trying a different approach. Simply put, social search tools are internet wayfinding services informed by human judgment.

Wayfinding, because they’re not strictly search engines in the sense that most people know them. And human judgment means that at least one person—but more likely dozens, hundreds, or more—has “consumed” the content and decided it’s worthy enough to recommend to others.

But that word “informed” can mean many things. In its broadest sense in this context, informed means “influenced,” and in the best of all possible worlds this influence is helpful, thoughtful and wholesome. Unfortunately, some “informed” influences on search results come from egregiously uninformed people and downright idiots.

Social search takes many forms, ranging from simple shared bookmarks or tagging of content with descriptive labels to more sophisticated approaches that combine human intelligence with computer algorithms. And despite all of the recent attention, social search isn’t really new. So why is it such a hot topic? To understand, it’s helpful to know a bit of the history of human-mediated search efforts.

A brief history of social search

We’ve always had social search, even from the early days of the internet. Before the emergence of the first search engines in 1993 or thereabouts, people relied on pages with links to their favorite sites. One of the first was created by web inventor Tim Berners-Lee, and it’s still online—though most of the links on the page have long since broken.

Yahoo, one of the first directories of web sites, was created by a team of human editors who surfed the web and wrote up brief descriptions of the sites they found. The Open Directory Project, the Librarians’ Index to the Internet and the U.K.-based Resource Discovery Network are all directories of web sites created by people, and all have been around since the early days of the web.

You can argue that even algorithmic search engines, which rely on automated, software-based crawlers and indexing systems, are social search systems to a degree—after all, the software is written by people and incorporates judgments about the quality, relevance and importance of web sites.

Indeed, Google’s famous PageRank algorithm, which analyzes the link structure of the web and assigns more importance to pages with many “high quality” links pointing to them, is fundamentally a form of social search. Why? Because PageRank relies on the collective judgment of webmasters linking to other content on the web. Links, in essence, are positive votes by the webmaster community for their favorite sites.
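To make the “links as votes” idea concrete, here’s a minimal sketch of the PageRank iteration in Python. This is an illustration of the published algorithm’s core idea, not Google’s actual implementation; the toy graph, the damping factor of 0.85, and the fixed iteration count are all assumptions for the example.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.

    Returns an approximate PageRank score for each page. Each page
    repeatedly passes a share of its rank along its outbound links,
    so pages collecting many "votes" accumulate higher scores.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank
    for _ in range(iterations):
        # every page keeps a small baseline rank (the damping term)
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            # split this page's rank evenly among the pages it links to
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# A toy web of three pages: "c" receives the most inbound links ("votes"),
# so it ends up with the highest rank.
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(graph)
```

Running this on the toy graph ranks page “c” highest, since both “a” and “b” link to it—exactly the webmaster-voting effect described above.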

Social search as it’s evolving today incorporates both automated software as well as human judgments about the nature of web content. That’s what makes social search intriguing—and fundamentally flawed, at least at this point.

Why? Several reasons.

Fundamentally, no matter how many people get involved with bookmarking, tagging, voting or otherwise highlighting web content, the scale and scope of the web means that most content will be unheralded by social search efforts. The web is simply growing too quickly for humans to keep up with it.

That doesn’t mean that social search efforts aren’t useful—in most cases, they are. It simply means that people-mediated search will never be as comprehensive as algorithmic search.

Another problem arises with tagging. Despite the popularity of tagging, especially with the web 2.0 mob, tags are not a panacea for categorizing and organizing the web. When used properly, tags can be very helpful in describing web content.

But problems arise with the inherent ambiguity of language—words often have multiple meanings, and people can have different interpretations of the same word.

The web lacks what librarians call a “controlled vocabulary,” a set of terms that have specific, unambiguous meanings that can be used in a uniform, consistent fashion by everyone tagging web content. Without this controlled vocabulary, tagging ultimately remains a chaotic, messy process.
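To illustrate what a controlled vocabulary buys you, here’s a small sketch of tag normalization in Python. The vocabulary mapping and tag names are hypothetical examples, not any real library standard; the point is simply that variant tags collapse onto one canonical term, the way a librarian’s thesaurus would enforce.

```python
# Hypothetical controlled vocabulary: maps free-form tag variants
# to a single canonical term. The entries here are made up for
# illustration only.
CONTROLLED_VOCABULARY = {
    "autos": "automobiles",
    "cars": "automobiles",
    "automobile": "automobiles",
    "film": "movies",
    "cinema": "movies",
}

def normalize_tags(raw_tags):
    """Collapse user-supplied tags onto canonical vocabulary terms.

    Tags not found in the vocabulary pass through unchanged, which is
    exactly the chaotic tail a controlled vocabulary tries to shrink.
    """
    canonical = set()
    for tag in raw_tags:
        tag = tag.strip().lower()
        canonical.add(CONTROLLED_VOCABULARY.get(tag, tag))
    return sorted(canonical)

# Three different users tag the same page three different ways,
# but all collapse to two canonical terms.
tags = normalize_tags(["Cars", "autos", "cinema"])  # → ["automobiles", "movies"]
```

Without the lookup table, “Cars,” “autos,” and “automobile” would be three unrelated tags as far as a search system is concerned—which is the chaos the paragraph above describes.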

Another factor is human laziness. Even if a controlled vocabulary were available, not everyone would take advantage of it. We’ve always had the ability to add tags and other metadata to our Microsoft Office documents, and yet how many people do this?

We also have a problem with idiots and spammers. Some people, no matter how well intentioned, will simply do a poor job of labeling content. Others will deliberately mislabel content in an attempt to fool search engines. In both cases, it’s difficult for software to recognize this spuriously labeled content—in social search, separating the signal from the noise is hard.

Despite these problems, social search still holds promise for improving our overall information seeking and consuming activities on the web. Ultimately, it’s likely that a combination of algorithmic search and the various types of social search systems will fuse into a hybrid that works really well for satisfying a wide variety of information needs.

We’re not there yet, but I’d expect to see real progress sometime over the next couple of years.
